Thursday, December 29, 2011

Sentinel bargaining

Iran has supposedly captured an intact US RQ-170 Sentinel drone by jamming the signal used to control it, and then feeding it false GPS information that caused it to land in Iran instead of returning home:
Iran's success in determining the moment of the unmanned vehicle's entry and its success in transferring command of the drone's movements from US to Iranian control systems is an exceptional intelligence and technological feat in terms of modern electronic warfare.
Now, if you were Iran, why would you tell America how you captured its drone, instead of keeping it a secret and perhaps capturing a few more or at least delaying America's progress in developing countermeasures?  Maybe Iran cannot resist bragging about its conquest, like the stereotypical comic book villain. But might there be another reason?

The Sentinel is a technological jackpot for non-American military powers.  But Iran itself does not have the technology to fully exploit its find.  The Sentinel is more valuable to Russia and China, and Iran will probably end up selling/trading it to one of them.  Negotiations have already begun:
Moscow sources disclose that the price set by Revolutionary Guards commander Gen. Ali Jaafari includes advanced nuclear and missile technology, especially systems using solid fuel, the last word on centrifuges for enriching uranium and the S-300PMU-1 air defense system, which Moscow has consistently refused to sell Tehran... 
Western intelligence watchers keeping track of the Russian and Chinese teams in Tehran have not discovered where the negotiations stand at this time or whether the Iranians have taken on both teams at once or are bargaining with each separately to raise the bidding. 
So, Iran has one good and two interested bidders.  Interestingly, it could make out better if it proactively prevents itself from obtaining another Sentinel in the future!

Suppose Iran, Russia, and China respectively value the Sentinel at $100, $1000, and $1200, and suppose Iran is a relatively weak bargainer.  If Iran were just dealing with China, it would sell for somewhere between their respective valuations of $100 and $1200 -- closer to $100 since it is a weak bargainer.  But with Russia in play, Iran is sure to fetch between $1000 and $1200 from China.  (If the price were any lower, Russia would surely outbid.)  Further, let's suppose that Iran is such a weak bargainer that it only gets $1000.

Now suppose this happens -- the Sentinel goes to China for $1000.  And suppose that in the near future, Iran captures another Sentinel.  Surely it will want to trade this one to the remaining interested bidder (Russia), for a price between $100 and $1000.  As a weak bargainer, let's say Iran actually gets $100.

But of course, if Russia anticipates the capture of this second Sentinel, it won't be willing to bid over $100 in the first round, and so it won't force China's bid up to $1000.  Russia may as well enter a bid of $0 in the first round, in which case China can get the first Sentinel for just $100 (since Iran is such a weak bargainer).  So with two Sentinels, Iran could end up making much less than the $1000 it gets from one Sentinel! (In this extreme example it gets just $200 total).

This is equivalent to a durable goods monopoly problem, where Iran is a monopolistic "producer" of Sentinels.  It could be that it maximizes profit by restricting supply to 1 Sentinel, but it faces competition from its future self.  If so, it needs a way of committing itself to not selling more Sentinels in the future.  Which it could accomplish by bragging openly about the precise way in which it captured the first Sentinel!
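The two-round logic above can be sketched in a few lines of code.  The valuations are the post's illustrative numbers, and "weak bargainer" is pushed to the extreme where Iran captures none of the bargaining surplus -- an assumption, not a claim about the real negotiation:

```python
# Backward-induction sketch of the Sentinel bargaining story.
# Valuations are illustrative; "weak bargainer" means Iran always
# sells at the best outside option, capturing no surplus.

IRAN, RUSSIA, CHINA = 100, 1000, 1200

def revenue_one_sentinel():
    # With both bidders present, competition forces China's price up
    # to Russia's valuation before Russia drops out.
    return RUSSIA  # $1000

def revenue_two_sentinels():
    # Round 2: Russia is the only bidder left, so weak-bargainer Iran
    # sells at its own reservation value.
    round2 = IRAN
    # Round 1: anticipating a $100 Sentinel later, Russia bids at most
    # $100 now -- so China faces no real competition and also pays
    # only Iran's reservation value.
    round1 = IRAN
    return round1 + round2

print(revenue_one_sentinel())   # 1000
print(revenue_two_sentinels())  # 200
```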

Monday, November 28, 2011

More Charity Auctions

Check out the post over at This Young Economist for some interesting additional facets of charity auctions.

I wonder if more value typically flows to the charity through the initial donation of items or through subsequent inflated bidding on those items.

Meteors and Gender Differences

A meteor falls out of the sky and destroys your house.  "Why me? Life isn't fair!" you might be heard to cry.  There would probably be a lot of nodding from all directions.

Well yeah, it's pretty unfair from the perspective of after the meteor hit your house.  Everyone on the face of the planet has a non-meteor-kablooeyed house except you!  But on the other hand, before it happened, everyone had an equal chance of destruction.  From the perspective of beforehand, everyone is in the exact same situation, so it is in some sense fair.

The basic idea of fairness is something like "equality between people."  But equality from what perspective?


To accommodate randomness, let's say a situation is fair if two people are drawing from the same distribution. The question is: which distribution are we requiring to be the same?  The distribution of outcomes conditional on what point in time, or more generally what facts about the world?  What do we take as given, and what is still up in the air?  Fairness depends entirely on what you condition on.

Disagreement here is widespread.  That's okay.  But once in a while, it's nice to step back and notice what we're conditioning on, and perhaps question it.



To that end, here's a thought exercise: Why do we worry so much about unequal outcomes between groups of people?  Let's take a step back.

Consider a world where each person simply has a life outcome and a label.  You can think of the life outcome as income or any other measure of "success," and the label could be something like race or sex.  Our moral starting point is that the label shouldn't matter; skin color or gender should be irrelevant to how much we care about a person.

Now, there is a joint distribution over outcome and label.  Associated with it, we have marginal densities for outcome and label alone, as well as conditional densities for each given the other.

Now consider these two scenarios:
  • Scenario 1: The label is drawn first, and then the outcome is drawn from the conditional distribution (conditional on the label that was already drawn).
  • Scenario 2: Life outcome is drawn first, and then the label is drawn conditional on the outcome.  
Mathematically, of course, the end product is the same in either case, namely (outcome, label) pairs drawn from the joint distribution.  (Or, if we are label-blind, we just see a bunch of people drawn from the same outcome distribution in each case).  But to most people, I think scenario 1 seems potentially much less fair than scenario 2.  Why?

Because fairness depends on what you condition on. In the first scenario it seems sensible to condition on the label, and say: The label shouldn't matter, so the distribution of outcomes conditional on the label should be the same for different labels. But in the second scenario, everyone draws from the same distribution of outcomes, and afterward there is an irrelevant draw from a label distribution. (In this world, people don't care about the label per se, so once their outcome is drawn, nothing else matters).
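The claim that the two scenarios are mathematically identical is easy to check by simulation.  Here is a minimal sketch; the particular joint distribution is made up purely for illustration:

```python
import random
from collections import Counter

random.seed(0)

# A toy joint distribution over (outcome, label) pairs -- the numbers
# are arbitrary, chosen only for illustration.
joint = {("high", "A"): 0.3, ("low", "A"): 0.2,
         ("high", "B"): 0.1, ("low", "B"): 0.4}

def marginal(var):
    # Marginal distribution of component var (0 = outcome, 1 = label).
    m = Counter()
    for pair, prob in joint.items():
        m[pair[var]] += prob
    return m

def draw(dist):
    # Draw a key from a {key: probability} mapping.
    r, acc = random.random(), 0.0
    for key, prob in dist.items():
        acc += prob
        if r < acc:
            return key
    return key  # guard against floating-point shortfall

def scenario(first, n=100_000):
    # Draw component `first` from its marginal, then the full pair
    # from the conditional distribution given that draw.
    marg = marginal(first)
    counts = Counter()
    for _ in range(n):
        f = draw(marg)
        cond = {pair: prob / marg[f]
                for pair, prob in joint.items() if pair[first] == f}
        counts[draw(cond)] += 1
    return {pair: c / n for pair, c in counts.items()}

s1 = scenario(1)  # Scenario 1: label first, then outcome given label
s2 = scenario(0)  # Scenario 2: outcome first, then label given outcome
# Both empirical distributions approximate the same joint distribution.
```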

But here's the kicker.  Should the scenario we're in really make any difference to fairness?  Do we really want our notions of fairness to depend on things like the order events actually unfold?  If the outcome is what matters, and the outcome is the same in each scenario, what does it matter how we got there?

Personally, I don't want to care about anything I don't care about!  I care about the pool of realized outcomes, not labels, and to be label-blind means to have no preference over how labels are split among those realizations.  I don't want to get sucked into finding differences between scenarios 1 and 2; I want my notion of fairness to be robust to the order of irrelevant events such as the assignment of irrelevant labels.



Is it unfair to women that there aren't so many good athletic career opportunities?  As a point of fairness, how do they compare to the 99.99% of men who just aren't good enough at sports?  Is there a difference between a woman who can't be an NBA all-star because she's drawing from a distribution that doesn't have support over the upper tail of the rankings, versus a man who can't be an NBA all-star because he happens to just not be awesome at basketball?


Tell me, what do you think?  Is there a coherent way to argue that labels should and shouldn't matter at the same time?


[Discrimination, by the way, is orthogonal to this post.  To the extent that unequal outcomes across groups are evidence of discrimination, you might be upset indeed, the above notwithstanding!  But even putting discrimination aside, many people will still observe the unfairness of inequality across groups.  Is there something else going on here?]

Saturday, November 26, 2011

Charity Auctions

Why are charity auctions insanely successful?  A few possibilities:

  1. They attract people who aren't regular auction-goers.  (These people don't know to correct for the winner's curse and so forth, where applicable).  
  2. People are happy to give some money to charity (and so will bid higher than they would have).  
  3. Charity auctions give people an excuse to bid the way they're really dying to bid in a normal auction, without fear of reprisal (from themselves or others) that they were being irrational.
Now, I wonder how efficient the outcomes of charity auctions turn out to be.  That is, do the people who value the items the most tend to end up with them?  Or do the items go to those who are most willing to give to charity?  The answer would tell us something interesting about which of the reasons above is playing the bigger role.

Monday, November 21, 2011

Perspective

I'm sure this is going to be linked from all over the place, but if you haven't seen today's xkcd comic...

I might have to buy the poster.

Monday, November 14, 2011

Strong feelings undeserved and overserved

I had a good friend in elementary school who went away to Israel and came back poisoned with hatred for The Enemy.  It doesn't really matter which side he was on.  If my memory is to be trusted, I didn't much care about the situation in the Middle East at the time.  I just remember being sad that my friend had been changed in such an overt and terrible way.

I have a big problem with blind hatred, which unfortunately seems to describe most hatred.  It is possible to understand something and still hate it.  But it's much easier -- and so more common -- to hate something you don't understand.  Because, though the world is a pretty ordered place, we often fail to see that order, and then we tend to assume it isn't there.  And if something makes no sense, we don't have to take it seriously.  We can drop it in the mud and walk all over it without a second thought, because it has no right to exist, making so little sense and all.  And the people who believe it are just as easily dismissed.

We try not to execute people without being pretty sure they're guilty.  Hatred is, I think, a sort of mental execution that we shouldn't be so hasty to dole out in advance of understanding.  How can hatred be so firmly buttressed by the conviction that the enemy is being unreasonable, when we don't even understand why the enemy is behaving in such a manner?  Extreme reactions should require a high degree of confidence.

So, what bothers me is hatred in advance of understanding.  What bothers me is the depressingly default assumption that the burden of understanding falls on the person (or thing) to be understood.  That it is their job to be understood by us, not our job to understand them.  That if we do not understand them, it must be that their position doesn't make sense, not that we are failing to make sense of their position.

Let me inject a little perspective.  Since the dawn of man, we have looked upon the world and seen no shortage of black boxes.  The world was impossibly mysterious at first.  We just didn't understand its workings.  In retrospect, this lack of understanding was a property of our brains, not the world.  But instead of recognizing that, we took our inability to explain things as strong evidence that they were, in fact, inexplicable.  There was no underlying order that we just hadn't discovered yet; rather, things were inherently mysterious.  Of course we gave names to the mysterious things -- called them "magic" and so forth -- but you can't actually demystify a cat by calling it Mittens.  Cats are complicated, you know.

Of course there are still many black boxes to be cracked open, although by now, in the realm of science at least, we've pretty much got the idea that the boxes have stuff in them even before they're opened.  We understand that discovering an unopened black box should probably weaken our confidence in our understanding of the world, not readily convince us that the world actually makes no sense.

But it seems we haven't quite absorbed the analogous moral on the topic of why people do what they do.

Suppose half the world believes A and the other half believes not-A.  Quite apart from the question of whether A or not-A is the truth, there is the question of why a person might think one or the other is true.  And if we can't understand why people on the other side of the table think what they think, our brains have failed.  It should weaken our confidence in our true understanding of the situation, not convince us that those people actually make no sense.

And when we fail to understand the other side, we are in no position to make a legitimate judgment about what ought to happen, or to hate anyone for supporting an outcome we don't even know how to reasonably evaluate.

Of course, transactions and debates and negotiations and wars can occur without either side understanding the other.   And I don't mean to say that's inherently wrong; it will sometimes be optimal to go to war in advance of full understanding.  But confusion should make us uncomfortable, not somehow magically bolster (!) our eagerness.

[This post inspired by a depressing display of blind hatred in a Facebook debate between several people from different countries in the Middle East.]

Tuesday, November 8, 2011

Punkin Chunkin

Via MR, a progress report on the pumpkin launching ability of our great nation.  Why has the max distance launched leveled off at about 4000 feet in recent years?

I'm not exactly sure why, but I once actually attended the Punkin Chunkin event.  There were several classes of competition, from catapults and trebuchets all the way up to pneumatic cannons.  If memory serves, the cannons were arrayed at one end of a large field, and would take turns firing these white pumpkins.  Someone would rush to where the pumpkin asploded, and the distance would be measured, possibly with some type of laser device.

Only, the field wasn't long enough.  Some of the pumpkins went so far that they were lost in the woods at the other end of the field.  They didn't count, sadly.

So maybe 4000 feet is just the length of the field.  Or maybe this high school memory has decayed way beyond the point of trustworthiness.

Monday, October 31, 2011

The murderous philanthropist

Let's put on our Thought Experiment caps and consider the case of the murderous philanthropist.  Here is his MO: He finds people in need, and asks them if they would like a turn in his Strike It Rich machine.  "The machine is simple," he explains.  "You strap yourself in, and press this big red button.  With probability one half, an arm swings down and gives you a million dollars.  Otherwise, though, you are instantly and painlessly executed by the machine."

To be rich or not to be, that is the question.  You may find this horrifying or not, depending on your outlook.  It's certainly far beyond selling organs.  But here's a fact: In the real world, many people would voluntarily take the offer...

  • Perhaps people who are practically on the verge of death anyway.  
  • Perhaps people who have many friends and family members, or a really good cause they value more than their own life, that could really benefit from the money.

Or, perhaps, people who think like this: a minute from now, I will either exist or not.  If I don't exist, I won't experience anything, and I will have no preferences over anything.  I can only care about anything conditional on existing, and conditional on existing, I will be rich.

Buy it?  No?  Here, try these on for size:

  • Maybe, instead of a machine, the philanthropist offers to visit clients in the night, and either dump a million dollars over their bed or kill them quietly in their sleep.  From the perspective of subjective experience, it is true that when they wake up, they will wake up rich.
  • Or maybe the murderous philanthropist is God -- not an inappropriate title for many of his most popular incarnations, by the way -- and He plays this game on everyone's behalf before they yet exist.  In such a world, everyone who comes to exist, lives a very nice existence.  But other "entities" that have not yet attained personhood are deprived of that future.
  • Or maybe someday our civilization gains the power to revive people who died long ago.  (This one is supposed to get rid of the "murderous" part, while maintaining the prior existence of the parties involved).  Does anything change when the (not so murderous) philanthropist works his magic over the decision of who to revive now?    To continue in the vein of the thought experiment, imagine that the potentially revived souls know this will happen in advance, and have signed up for a 50-50 shot at awesome life or continued death.  Is this better or worse than reviving all of them but not giving them awesome lives?

I'm not going to argue about "the right way to think" here.  But if you feel differently about this scenario in its various incarnations, it's probably not a bad idea to think about it.  I know I have mixed feelings.  The second-to-last scenario is empirically indistinguishable from the current state of the world, and I don't know what sort of existence rule I would most prefer such a god to implement.

We care about the quality of life on this planet.  But should we care about something like the sum total of life quality across people, or something more like average life quality conditional on existing?  That is, do we prefer a bunch of people with okay lives, or do we want fewer people with better lives?
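To put toy numbers on that question (the wellbeing figures are, of course, made up):

```python
# Two hypothetical worlds: many people with okay lives vs. fewer
# people with better lives.  Wellbeing numbers are invented.
world_many = [5] * 10   # ten people, wellbeing 5 each
world_few  = [9] * 4    # four people, wellbeing 9 each

total_many, total_few = sum(world_many), sum(world_few)   # 50 vs. 36
avg_many = total_many / len(world_many)                   # 5.0
avg_few  = total_few / len(world_few)                     # 9.0

# A "sum total" criterion prefers world_many (50 > 36); an "average,
# conditional on existing" criterion prefers world_few (9.0 > 5.0).
```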

When people run a Rawlsian veil of ignorance-type thought experiment of "what kind of world would I want to live in, if I didn't yet know who I'd be," they are implicitly conditioning on existing in that world.  This is inadequate for policy decisions that affect who lives and who doesn't, unless you're comfortable with conditioning on existence.  If you're not comfortable with that, you'll have to broaden your thought experiment to include the probability of existing in the first place.  To me there's something attractive about having more people with lower average wellbeing, but more total wellbeing.  But there's also something compelling about having high wellbeing for anyone who's able to appreciate high wellbeing (i.e. anyone who exists).  In the end I think most people prefer something in between the two extremes, but at the same time, it is easy to forget that a spectrum exists, from issue to issue.  You could easily get stuck at one end by accident, if you weren't careful.

When we talk about cattle treatment, it's so critical to ask whether we'd prefer a world with fewer but happier cows, or more but worse off cows.  Almost everyone seems to condition on existence here, although one gets the sense that they haven't explicitly thought about (or even noticed) the inevitable tradeoff.

Do you have a problem with the Strike It Rich machine, while you feel no sense of obligation to unborn cows or persons?  You're not committing some sort of logical sin if your way of thinking about these things follows very different logic.  But it's something to be aware of, and perhaps curious about.  Where my answers to the above questions differ, I wonder why that is, and whether I'm really okay with it.

(Many people will say that extant people have "rights" that are not shared by nonentities.  But to lexicographically prioritize extant people seems suspicious and too-convenient.  Nor does this path lead to a complete answer, because it provides no way to rank alternatives in which different subsets of potential people exist).

Monday, October 24, 2011

The adventures of collin rose: Don't tase me bro edition!

Tony points me to this article about two tased college football players, as reported by Missoula Police Sgt. Collin Rose.

Please join me in welcoming Sgt. Collin Rose to the cast of characters here at Economonomics.  He will be taking the lead whenever we encounter a dynamic game played against one's future self.

And because I do not wish to anger the Missoula police department, if you are Sgt. Collin Rose, let me dispel any concerns & confusions you may have.  You are an object of interest only because Collin sounds like columns and Rose sounds like rows just go with it it's a game theory thing.

Friday, October 7, 2011

Jeopardy! strategy

Fellow crossworder joon pahk has now won the last 4 nights of Jeopardy! Will he sweep the week?  I will not presume my typical reader is tapped into the crossword scene, but if you were, you would probably know who joon is, and you would probably be tuning in for these nail-biters.  I must say, Jeopardy is a lot more exciting when you have someone to root for!

Anyway, I have been thinking a little bit about the game theory of Final Jeopardy betting.  If you don't know, contestants (1) learn the category of the question, (2) place bets anywhere from $0 up to all of their current money, (3) hear the question and submit their answers, after which they either gain (if correct) or lose (if incorrect) the amount of their bid.  So in FJ, you can accomplish anything from losing all your money to doubling it...and so can the other players.

As you know, I normally like Rose and Collin for my rows and columns, but let's use Joon and Franny today.  Joon has $14,200 going into Final Jeopardy, while Franny has $17,600.  To simplify this, let's talk about betting "big" or "small," by which I loosely mean betting quantities that do and don't bridge the gap between their scores.  Yes, this is casual...umm let's say betting small means betting $0 and big means betting everything. Just go with it!

Here's a breakdown of the possible outcomes:
  1. If they both bet big, Joon wins only when he's right and she's wrong.
  2. If they both bet small, Joon will never win.
  3. If Joon bets big and Franny bets small, he will win whenever he's right.
  4. If Joon bets small and Franny bets big, he wins whenever she's wrong.
Oh and we aren't done simplifying, no sir.  Let's say that the probability of getting the question right is p for both of them, and let's say those are independent (see below for more on this tho).  And let's say they only care about winning (i.e. not winnings).  Oh man.  Do you feel the powahhhhh?  You know the drill, game theory types!  We just got ourselves one of these:


What can we see from this diagram other than the fact that it is obviously scratchwork?  It shows the payoffs to Joon and Franny, i.e. their probability of winning, when they play various combinations of betting Big and Small.  [If this is complete gibberish to you and you have a powerful urge to change that...]

Unsurprisingly, the unique equilibrium is a mixed strategy.  If my algebra is correct (and that is a big if), Joon plays Small with probability r=(p^2)/(1-p+p^2), and Franny plays Small with probability q=(1-2p+p^2)/(1-p+p^2).  (Damn you Blogger for not enabling TeX!  Or maybe it is good to disincentivize math in blog posts)

Anyway, if p=.5, this works out to r=q=1/3 (they are each more likely to bet Big) and the probability of Joon winning is 1/3, not bad for someone who is behind going into the finals.  If p=2/3, we have r=4/7, q=1/7, and the probability of Joon winning is 2/7, slightly worse.  As we move toward p=0 or 1, Franny becomes heavily favored; indeed, if p=0 or 1, she can guarantee a win by betting 0 or everything, respectively.
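If you'd rather trust arithmetic than my algebra, here is a quick check using exact rational numbers, under the payoff matrix implied by the breakdown above (Joon's win probability is p(1-p) if both bet Big, p if only he bets Big, 1-p if only Franny bets Big, and 0 if both bet Small):

```python
from fractions import Fraction

def equilibrium(p):
    # The claimed mixed-strategy equilibrium from the post.
    r = p**2 / (1 - p + p**2)        # P(Joon bets Small)
    q = (1 - p)**2 / (1 - p + p**2)  # P(Franny bets Small); note that
                                     # (1 - p)**2 == 1 - 2p + p**2
    # Joon's win probability against Franny's mix, betting Big vs Small:
    v_big   = (1 - q) * p * (1 - p) + q * p
    v_small = (1 - q) * (1 - p) + q * 0
    assert v_big == v_small          # Joon is indifferent, as required
    # Joon's win probability against his own mix, by Franny's action:
    f_big   = (1 - r) * p * (1 - p) + r * (1 - p)
    f_small = (1 - r) * p
    assert f_big == f_small          # Franny is indifferent too
    return r, q, v_big

r, q, v = equilibrium(Fraction(1, 2))  # r = q = 1/3, Joon wins w.p. 1/3
r, q, v = equilibrium(Fraction(2, 3))  # r = 4/7, q = 1/7, Joon wins w.p. 2/7
```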

The gameshow determines the difficulty of FJ questions and thus has control over p.  Where do they set it?


*
One interesting detail ignored above: whether the contestants' answers are right or wrong is likely to be correlated.  A question can be easy or hard, and if Joon gets the answer right, it's more likely that Franny will get it right as well.  Interestingly, when Joon answered tonight but before Franny's answer had been revealed, the subjective (as perceived by me) probability of him winning momentarily fell even though he got it right!

Why?  If they both bet small, Joon has no chance.  So if Joon bets small, he needs Franny to bet big.  And if she bets big, Joon wins precisely when she answers incorrectly.  But the fact that he got the question right made it much more likely (in my mind) that Franny would get it right too!

Added: Here's an article on the events of the week.

Wednesday, October 5, 2011

Spot the Problem: Answer

The economonomadollars have been claimed; congratulations to T-Bone and TAllen, and I look forward to answering their questions. 

Below is my solution:

When you're evaluating the biological effect of a drug on HIV transmission, what matters is not the HIV transmission rate per 100 person-years.  What matters is the transmission rate per sexual encounter.  For someone deciding whether it's a good idea to take this drug, the question is how much it increases or decreases the probability of HIV transmission per encounter.  That is the fundamental biological property of this drug that the study appears to be targeting. [We might also be interested in the effect of drugs on the spread of disease, but they do not appear to be talking about that here].

The problem with letting rate per year stand in for rate per encounter is that people with a more effective contraceptive might just be having more sex.  Because the overall cost of sex is lower when you're using a drug that more effectively prevents pregnancy, right?  Indeed, even if the transmission rate per encounter were identical, the hormonal-contraception group ought to be transmitting more HIV per person-year.
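A toy calculation makes the point.  Every number here is invented (the encounter frequencies were picked only to roughly mirror the study's per-person-year figures); the per-encounter risk is identical across groups by construction:

```python
# Hypothetical illustration: identical biological risk per encounter,
# but different encounter frequencies, yields different annual rates.

risk_per_encounter = 0.001  # same for both groups, by assumption

def rate_per_100_person_years(encounters_per_year):
    # Probability of at least one transmission in a year, times 100.
    return (1 - (1 - risk_per_encounter) ** encounters_per_year) * 100

print(rate_per_100_person_years(66))  # ~6.4 -- more-sex group
print(rate_per_100_person_years(38))  # ~3.7 -- comparison group
```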

*
Anyway, this is a serious flaw in either the study or the reporting.  Either the study doesn't address it, or the writer doesn't. It seems like someone should be hired (not necessarily an economist, though an economist would do) to just sit around and point out obvious flaws in research that really matters a whole lot.  The next obvious question is, why isn't that happening?

If the problem is at the NYT (and yes, the NYT has a serious problem in this area, regardless of whether this particular article is an example of it), then it's not too hard to see why they don't hire someone to fix it. Most of their readers don't notice or care, and you can write more sensational articles for those customers if your standards are lower.

And if the problem is in research in other fields...well, it's easy to see why they might not want someone shooting down all their ideas, either...but now we can at least ask where the funding comes from.  Someone who's interested in solving big medical problems in Africa might want to hire a bunch of medical types to study these issues, together with some other guys to keep them in check.

Tuesday, October 4, 2011

Spot the Problem

Here's some excerpts from a recent NYT article.  What's wrong?  You should probably at least suspect it from the very first sentence.
The most popular contraceptive for women in eastern and southern Africa, a hormone shot given every three months, appears to double the risk the women will become infected with H.I.V., according to a large study published Monday.  
...In each couple, either the man or the woman was already infected with H.I.V. Researchers followed most couples for two years, had them report their contraception methods, and tracked whether the uninfected partner contracted H.I.V. from the infected partner, said Dr. Jared Baeten, an author and an epidemiologist and infectious disease specialist.
...The study found that women using hormonal contraception became infected at a rate of 6.61 per 100 person-years, compared with 3.78 for those not using that method. Transmission of H.I.V. to men occurred at a rate of 2.61 per 100 person-years for women using hormonal contraception compared with 1.51 for those who did not.
First person to answer correctly gets one economonomadollar ($1$).  Currently the $$ is not accepted across the US, but every fiat currency has to start somewhere. If nothing else, the American dollar can be used to pay your taxes, so to give the $$ some backing, let's say that I'm currently selling my answer to any question you might have, for $1$ per answer.  [As a disclaimer, please note that I said my answer, not the answer.  This ensures that I can always make good!]

Tuesday, September 27, 2011

One shoe or two?

In economics, "left shoe" and "right shoe" are an often-cited example of perfect complements.  But my friend went to India and brought back an interesting story.  He found himself in need of a new pair of flip flops, so he located a shoe stand.  The man asked him: one shoe or two?

At first he was confused: why would anyone want one shoe?  But in retrospect it's obvious!  Here we buy our shoes in pairs, and we get a new pair when the old shoes stop looking nice.  But if we didn't have that luxury, we would wear our flip flops until one of them broke, and then we would just replace the one shoe.

Thursday, September 22, 2011

Economics with time travel?

Whoa.

This is almost certainly an experimental error.  But in the meantime, why not entertain some Econ Time Travel Thoughts?

First of all, let it be known that from an economic perspective, time travel isn't inherently paradoxical.

Wait, but isn't it?  I mean, if you can travel back to before you were born and fiddle with things so that you're never born, but wait then how can you ever exist to travel back and fiddle with things, but wait then...

And doesn't time travel muddle causality in a pretty serious way?  I mean, if you're going back to yesterday to tell yourself that today you should go back to yesterday to tell yourself that today you should go back to...

A lot of people would throw up their hands, but we're economonomists so time travel doesn't scare us.  Time travel does muddle causality, but it does so in the same way that causality is always muddled in that most standard of economic concepts, the equilibrium.

When everyone in the class is cheating, who caused it?  Who is to blame?  Well why does there have to be someone to blame?  Why can't everyone just be reacting optimally (perhaps even socially optimally) to everyone else's behavior?  An equilibrium is a fixed point, where everyone's behavior causes and is caused by everyone else's.

A world with time travel can be in equilibrium too, of course.  Even if I go back to last week and deliver a message to myself, that message may be what causes me to behave precisely the way my present self is behaving.  And could I go back in time and mess with my birth, create paradoxes and whatnot?  Well, maybe the right way to think about it is that we just don't observe such out-of-equilibrium behavior.  It's not like -- as in many movies -- we exist until the moment we go back in time and mess with our past.  Rather, the whole history either is consistent with itself, or not.  If not, then what's to see?  Where time travel is concerned, you can refuse to believe in out of equilibrium behavior without being a stubborn classical economist.

I'm not saying there are many potential universes that support a time travel equilibrium, but they are certainly not a logical impossibility.  So we can safely consider the possibility of being in one ourselves, and we can safely imagine what some fun ones might look like.

*

Here's a fun one, by the way.  Has anyone seen The Time Traveler's Wife?  Spoiler alert!  Strictly from the perspective of economonomic time travel theory, it is a very good movie.  (If you're not into that stuff, you'll have to get your recommendations somewhere else!)  Most movies with time travel are, I think, of the "don't think about it too hard" variety...but this one displays, essentially, a stable time travel equilibrium.  What I most like is that it makes you check your causality judgment at the door, in the same way that the earlier post about cheating is supposed to make you check your judgment at the door.

When Girl meets Guy for the first time in her life, she is like 7 and he is already in love with her, and he cultivates that relationship as she grows up, which you could say is kinda sick.  But on the other hand, when Guy meets Girl for the first time in his life, she is already in love with him and does the very same thing.  Due to the magic of time travel, there is no sense in which one of them initiated the relationship or caused it all to happen.  Hopefully you walk away from this movie recognizing that there are no fingers to point.  There is nothing here but a time travel equilibrium, a fixed point where everything is in perfect balance, whatever that balance may be.  There's a good deal of information flow to the present from the future and the past, but it is always exactly enough to do exactly what it needs to do, namely to preserve the equilibrium.

Tuesday, September 20, 2011

Real Transferrable Utility

In economics, we don't normally treat utils as real things so much as a theoretical convenience to make the math easier.  It is preferences that are fundamental, we say, while a utility function just represents those preferences.  We set up u so that u(x)>u(y) whenever x is preferred to y, and then we just have to look at u to determine what the agent wants.

Indeed, this is so fundamental that within economics, "transferrable utility" really means "transferrable money."  (There are situations when you can and can't transfer money, and we refer to them as transferrable and nontransferrable utility situations).

But what if utils were an honest-to-god real thing, and what if we could trade them?  What if we could literally stick some utils in a bottle and trade them to someone else for bananas?  Although this may never be directly relevant to the real world, it is a very good exercise in economics to think through the implications.  In what ways is the transferrable utils world better than our world?  Does it solve any problems automatically that we would have to work hard to solve?  Which problems does it still not solve?  What sort of social welfare laws would we want to see in such a world?   What would a market look like?

More to come.  In the meantime, feel free to take a stab.

Monday, September 19, 2011

Qwikster

so much depends
upon

a red back
ground

streaming with
movies

beside the white
Netflix

*

William Carlos Williams penned the above poem in 1923.  It is often considered his masterwork, and as such it has been the subject of much analysis and speculation over the years.  But no one truly understood what it meant until yesterday.  Seems pretty clear, in retrospect.

Wednesday, September 7, 2011

Good news and bad news

My friend, who just got back from the dentist, imagines the contents of the voicemail he just received:

Well there's good news and bad news, sir.  The good news is, we drilled the right teeth.  The bad news is, we were supposed to drill the left teeth...

Friday, September 2, 2011

It's a feeding frenzy

We were like, "This is cool and all, but why is the music so dramatic...?"







And then we understood.

Saturday, August 27, 2011

Sevenths are awesome

Is it common knowledge that sevenths are awesome?  In my experience, no.

If your 6th grade math teacher didn't drill it into your brain, you may not know off the top of your head that 1/7th is .142857 repeating.  The cool thing is that if you did know that off the top of your head, you could also know the decimal equivalents of 2/7, 3/7, ... , 6/7 without any additional memorization.

2/7=.285714 repeating
3/7=.428571 repeating

hey...those are the same string of 6 numbers just shifted over by a few places! It continues:

4/7=.571428 repeating
5/7=.714285 repeating
6/7=.857142 repeating

And there you have it.  Somehow they're all just the same infinite string of numbers starting at slightly different positions.  Like many math facts, if you carry this around your whole life, you will find many excuses to use it, and you will at some point even have a great nerd moment where someone else actually wants to know 3000/7 and you can tell them 428.571428 repeating before they have even located their calculator.  Mathemagical.

Perhaps The Answer to the Ultimate Question of Life, The Universe, and Everything is more like 6/7 than 6*7.  In any case, why do sevenths work like this and what deeper math is at play here?  Feel free to offer any thoughts you may have.
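The rotation pattern is easy to verify programmatically.  Here's a quick Python sketch (the long-division helper is my own illustration, not anything standard):

```python
# Quick check that 2/7, ..., 6/7 are all rotations of the repeating
# block of 1/7.
CYCLE = "142857"

def repeating_block(numerator, denominator=7, digits=6):
    """First `digits` long-division digits of numerator/denominator."""
    out, r = [], numerator % denominator
    for _ in range(digits):
        r *= 10
        out.append(str(r // denominator))
        r %= denominator
    return "".join(out)

rotations = {CYCLE[i:] + CYCLE[:i] for i in range(6)}
blocks = {n: repeating_block(n) for n in range(1, 7)}
for n, b in blocks.items():
    print(f"{n}/7 = .{b} repeating")
assert all(b in rotations for b in blocks.values())
```

The deeper reason, for what it's worth: 7 is a "full reptend" prime -- 10 is a primitive root mod 7, so the remainders in the long division cycle through all of 1 through 6 before repeating, and each numerator simply enters that same cycle at a different point.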

Added 8/30: Apparently these are called cyclic numbers!  Thank you, internet.

Friday, August 19, 2011

Usury 3: Me, myself, and I(ntertemporal tradeoffs?)

In the original usury post I said that we could explain a pattern of [splurging up front, massive debt thereafter] with time preferences that strongly favor the present.  Here are a couple possibilities.

1. For starters, maybe people's preferences -- while consistent over time -- are just really impatient.  They care a lot more about the beer and chips today than their increased poverty a year from today.  There's certainly nothing wrong with this in theory.  A person's goal isn't to maximize their bank account in the long run; rather, they care about their own rate of time preference, and it could be making them better off overall to borrow at 15% today in order to consume that much more in the present.

While this is possible, some find it unconvincing.  Also, yawn.  It is actually written into my economonomics contract that consistency gets a verbal and written yawn.  Do you prefer option 2?  At the very least, it's actually interesting...

2. Maybe people's preferences display "time inconsistency."  Their "true" preferences are rather patient, but in the moment they cannot resist the beer and chips even though it's "really" not good for them overall.  If they could commit in advance to not borrowing so much from the credit card company, they would.
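The standard way to formalize this (my addition, not from the post) is quasi-hyperbolic "beta-delta" discounting: the present period gets weight 1 and a payoff t periods away gets weight beta * delta^t.  A toy sketch with made-up utility numbers, showing the preference reversal:

```python
# Quasi-hyperbolic (beta-delta) discounting: the weight on utility
# t periods ahead is 1 for t = 0 and beta * delta**t for t >= 1.
def weight(t, beta=0.5, delta=0.95):
    return 1.0 if t == 0 else beta * delta ** t

U_SPLURGE, U_ABSTAIN = 15, 25   # invented utils: splurge now vs. payoff later

# Viewed one period in advance (both payoffs still in the future),
# abstaining looks better:
early_splurge = U_SPLURGE * weight(1)
early_abstain = U_ABSTAIN * weight(2)
assert early_abstain > early_splurge

# But when the moment arrives, the splurge is undiscounted and wins:
now_splurge = U_SPLURGE * weight(0)
now_abstain = U_ABSTAIN * weight(1)
assert now_splurge > now_abstain
```

The advance self would happily sign a commitment contract ruling out the splurge; the in-the-moment self would tear it up if allowed.  That gap is exactly the demand for commitment devices.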

Time inconsistency is often considered irrational, but let me take a few minutes to make a relatively deep comment about the nature of preferences.

First of all, the idea that you are actually a single entity over time is at best a simplification, and at worst an actively detrimental illusion.  "Multiple selves" is not just a way of looking at things...it's reality, right?  At the level of reality, you are not the same person tomorrow!  There is some transition rule that turns your constituent atoms of today into your atoms of tomorrow, and why should they necessarily have to agree with each other?  And even if you were physically constant over time, different instances of you wouldn't have the same bodily experience from the time-dated consumption of a bag of chips (yesterday-you enjoys yesterday-chips more than today-you enjoys yesterday-chips), and different versions of yourself could easily have different preferences over when to eat chips.  Would you expect a hundred clones in a room to agree over who gets the chips?  More likely they'd all want the chips for themselves; they have the "same" preferences but over different identities, which is to say they don't really have the same preferences at all, over states of the world.

That is the default.  We can try to model things with consistency, and we can look for reasons why consistency might come to the surface, but it's important to keep in mind that consistency is not the privileged default state of the world.  It is a special case if it turns out that different versions of yourself all agree with each other.

There are about 8 directions we could go from here.  We could describe some salient theoretical possibilities for the shape of our time preferences.  We could talk about empirical evidence for the actual shape of our time preferences in various settings.  Or we could go a level deeper and talk about where the preferences came from in the first place, and what sorts of preferences were likely to have arisen.  We could talk about the powerful (but not all-powerful) forces pushing our preferences in the direction of consistency, and reasons we might systematically deviate from consistency.

But right now I just want to make a simple remark. As soon as there is any disagreement whatsoever between different instances of "your" preferences, what's to say which is the "true" or "right" set of preferences?  Six years in advance you want to abstain from the beer and chips.  In six years, you don't.  What does it mean to say the advanced preferences are the "real" ones?  In some sense there are simply many agents with different preferences in conflict with each other, doing what they can to get their way.  Now perhaps most agents agree that You-2017 should not go for the chips, while You-2017 disagrees, in which case are we really making a comment about social optimality when we say You-2017 is in "error"?  And if social optimality is on the table, by all means, please tell me: what weights are we using?  How is it related to the discounting that's already taking place?  It is so very nice if preferences start out consistent, because then everyone agrees on everything...but as soon as there's disagreement, there's a big discussion to be had. (More to say, another time).

I'm happy to abstract away from this, and a single, consistent agent over time is a great fit in many situations.  But at the level of reality, I don't exactly like to think of time inconsistency as irrational. Irrationality presupposes a correct set of preferences which are being violated, but there is no correct set of preferences here, only a bunch of preferences in disagreement.

(The discussion of why a particular set of preferences may jump out and seem to be "correct"...is also for another time).

Sunday, August 14, 2011

Usury 2: Relativity?

In the previous usury post, I said that one way we could explain a pattern of [splurging up front, massive debt thereafter] is if the same consumption bundles are not offered over time.  What if people strongly prefer today's bundles to tomorrow's?  Of course, the meat of the explanation is in giving you a compelling reason to believe this may be the case.

So, here's a reason.  Suppose there are relative effects together with concavity.  That is:
  1. Utility depends on what your peers are doing.  When other people are eating better food, you get higher utility from better food yourself.
  2.  The losses from being worse off than your peers are more severe than the gains from being better off than them.  When you smell the steaks your neighbors are eating, you really want steak too.  When your neighbors aren't eating steaks, you would still like a steak but not nearly as much as when it was being dangled in front of you.
If these hold, then people care about their position relative to the pack, and in particular it's better to stick with the pack than to spend some time below and some time above it.  When your friends are splurging backed by 15% interest credit card debt, you feel like you need to as well.  When you crash later, at least they will be crashing too.  So, we can get stuck in an equilibrium where, given that everyone's taking on massive debt, it's optimal for each individual to take on massive debt.

So when I say that different consumption bundles are offered over time, I don't mean that the physical goods are any different over time.  I'm imagining social effects as part of the consumption bundle, and that's what's changing.
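To make the multiple-equilibria claim concrete, here's a toy best-response check.  All the payoff numbers are my own invented assumptions: splurging carries a modest net direct cost, but falling below the pack hurts more than rising above it helps.

```python
# Toy payoffs with asymmetric relative effects (all numbers invented):
# being below the pack hurts more than being above it helps.
def utility(i_splurge, peers_splurge):
    base = 10 if i_splurge else 9    # small direct pleasure from splurging...
    debt = 4 if i_splurge else 0     # ...but a larger repayment cost later
    gap = int(i_splurge) - int(peers_splurge)
    relative = -5 if gap < 0 else (1 if gap > 0 else 0)
    return base - debt + relative

# Splurging is a best response to splurging peers:
assert utility(True, True) > utility(False, True)
# ...and abstaining is a best response to abstaining peers:
assert utility(False, False) > utility(True, False)
```

With these numbers, "everyone splurges on debt" and "everyone abstains" are both equilibria, even though everyone would prefer the abstaining one.  Which one you're stuck in is a matter of equilibrium selection, not individual irrationality.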

*

A couple of comments.  First, I like this explanation but I do think of it as a partial explanation.  It doesn't explain why this equilibrium was selected in the first place, but it goes a long way to explaining the extent of people's willingness to do what other people are doing.  At the very least it serves to magnify tendencies that may have their roots in other explanations.

For example, perhaps credit card companies pulled one over on people, maybe they managed to actually trick them.  I'm not going to reject this explanation per se, but a takeaway from today is that there can be much less of this than you would require if it were the whole story.  We don't need everyone to be deluded, to end up in an equilibrium where everyone is taking bad deals.

Second, I think the above is a nice story for understanding status quo effects more generally.  Especially when it's costly to gather the info necessary to find the best course of action, "do what everyone else is doing" is a particularly cheap and safe shortcut.  If they're right, great.  If they're wrong, at least you all go down together.

I'm going to be loose here, but note that without the concavity and relative effects, people would observe the crowd's opinion, and then perhaps do better by putting even a slight amount of their own effort into further information-gathering.  (With everyone doing this, the crowd could actually get pretty smart).  But here, you have to put a serious amount of effort forward in order to get enough additional info to make deviating from the crowd worthwhile at all.  (So collective ignorance is supported as an equilibrium).

I expect that:

  • Most people model most of their behavior off the status quo among their peer group, while selecting a few areas to focus on much more intently, as opposed to spreading their attention across all areas.
  • Status quo sunscreen usage doesn't necessarily reflect the available information together with how much people care about health risks per se, ignoring social effects.  Maybe we're just in a bad equilibrium.

Friday, August 12, 2011

Does usury exist?

Dan Lemire writes:
What is usury? Lending money knowing that it will make people poorer... 
Economists who expect people to be rational won’t believe in usury. Surely, people borrow money to be better off. Yet the average credit card debt per household is over $15,000 in the USA (and it grows all the time, of course). Does anyone believe that these $15,000 are invested in highly profitable ventures? (They would need to be highly profitable since credit cards often charge over 15% for loans.) 
...can anyone convince me that credit card companies produce wealth by charging exorbitant interests to people who use the money to buy beer and chips? 
First, as a pure matter of tone, I'm going to interpret "economists who expect people to be rational" not as "economists, who expect people to be rational," but rather as "the subset of economists who expect real people to actually be rational."  And by way of preterition, I'm not going to make any comment about the difference between a useful model and actually believing something literally about the real world, or about the variously constraining definitions of rationality and whether the aforementioned subset is even nonempty under the definition that noneconomists typically have in mind when they talk about economic rationality.  Instead, we will charge ahead and leave the question of whether economists are delusional fools to another day.

Now, accepting this definition of usury, the first step is to be careful about what it means to make someone "poorer."  Dan is taking "poorer" to mean something like "lower present value of wealth with respect to whatever interest rate," so that borrowing at 15% makes you poorer unless you're taking that borrowed money and investing it in another project with an even higher return.  But if that's poorer, then forget borrowing at 15% -- consuming makes you poorer!  It's fine to use "poorer" in this way, but then you don't get to equate "poorer" with "worse off."  A rational person's allegiance is to making himself better off, not maximizing his bank account!  (Rational people consume, of course).

By contrast, economists might take "poorer" to directly mean "worse off," i.e. having access to less desirable consumption bundles.  But then borrowing at 15% doesn't necessarily make you poorer.  It gives you access to fewer bundles tomorrow, but more today.  If you want the extra consumption today more than the foregone consumption tomorrow, you are not poorer for borrowing to enable that consumption pattern.

Upcoming, a few ways to complete the story. Perhaps -- for a special reason I will soon make clear -- the same consumption bundles are not offered over time, in which case people could simply prefer the consumption bundles offered today to the ones offered tomorrow.  Or perhaps the bundles are constant over time but there is something going on with their time preferences that makes them prefer to consume a lot up front.

Friday, August 5, 2011

Tuesday, August 2, 2011

Relationship advice for economists: Don't do it with models.

Economics has plenty to say about relationships, both in terms of individual behavior and how it aggregates. Today I want to frame the individual's problem as a peculiar sort of search problem.

For today, take as given that each individual has a bunch of characteristics, which make different people better or worse matches for each other. For instance, you may prefer someone who likes football as much as you, or someone who shares a lot of your beliefs.

The objective, loosely, is to spend as long as possible with as good a match as possible. But there is incomplete information. You don't know anyone else's vector of characteristics, and you have to meet and get to know them to find out. Some characteristics are uncovered quickly (e.g. physical appearance), others take weeks, months, or years to discover ("just how much can I trust this person?").

So you set about to find a good match, repeatedly sampling from a pool of candidates. You're dating someone, all the while learning more about them and their compatibility with you. On any day you can either opt to continue the relationship another day, or reject your partner and move onto the next possibility. In this simple example, the expected value of rejecting/starting anew is constant, so as usual the problem gives rise to a rejection threshold: if you should ever learn enough bad information that the expected value of continuing the relationship drops below the expected value of moving on, you move on.
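Here's a minimal simulation of that rejection-threshold rule.  Everything numeric is an assumption of mine: qualities live on [0, 1], the outside option is a constant, and learning is a running average of uniformly noisy signals.

```python
import random

OUTSIDE = 0.5   # assumed constant expected value of rejecting and starting over

def run_relationship(true_quality, noise=0.3, horizon=50, seed=1):
    """Each period you observe a noisy signal of your partner's quality
    and update a running average; you walk away the first period the
    estimate drops below the outside option."""
    rng = random.Random(seed)
    signals = []
    for t in range(1, horizon + 1):
        signals.append(true_quality + rng.uniform(-noise, noise))
        estimate = sum(signals) / len(signals)  # crude stand-in for a posterior
        if estimate < OUTSIDE:
            return t          # rejected after t periods
    return None               # relationship survives the horizon

print(run_relationship(0.9))  # good match: survives
print(run_relationship(0.2))  # bad match: filtered out quickly
```

With these parameters a 0.9-quality partner can never look worse than 0.6, so they always survive, while a 0.2-quality partner can never look better than 0.5 and is dropped immediately; real interest comes from middling qualities, where early noise can end a relationship that "should" have continued.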

Now here's a twist. All through your life, you continue to meet people and learn things about them, whether in a relationship or not. But if you are in a relationship, there's a cap on how much you're allowed to learn about anyone else. Conversely, there's a cap (not necessarily the same) on how much you are able to learn about someone else (in terms of relationship compatibility) without being in a relationship with them.

Together, these say that to find out whether someone will be a good enough match for you, you will eventually have to enter into a relationship, and once in a relationship, your outside search behavior is critically limited. Also, we are now imagining a model where the expected value of moving on is not constant, but rather depends on your current best candidate outside of the relationship, which is determined by what you've learned about them. (You can think of this as the best candidate who is willing to be in a relationship with you).

Excuse me while I make up some terminology. Say that a relationship is t-stable if in period t, neither party prefers to match with their best candidate outside of the relationship. That is, a relationship continues to the next period precisely if it is t-stable. Furthermore, say that a relationship is t-superstable if it would be t-stable for any possible best outside candidate. That is, given each partner's period-t expected value of the relationship, there's no possible outside option that could entice them away from their current relationship.

(The t is there to remind us that this really is a dynamic system that unfolds over time; superstability is a function of the information you currently know.  You could have a really great first date and be superstable in the first period of the relationship, but lose it later on as more characteristics become known.  And you may never know whether your relationship will last into old age; people change in ways they can't entirely predict. Perhaps it takes being 50 to know that you'll be compatible as 50-year-olds.   Keep it in mind, but for the rest of the post I'm just going to drop the t.)

And let me be clear that superstability does not mean you're matched with your "soul mate," your best possible match in the universe.  It means that out-of-relationship signals are simply not informative enough to compete with all the great things you know for sure about the person you're with.

To be sure, whether we should expect to find superstable matches depends on the particular social rules that govern the sort of search people are allowed/able to do inside and outside of what society has defined as a "relationship."  But I think the answer is generally yes.  Compatibility depends on so many important things that are impossible to observe outside of a relationship; attractiveness and surface personality and so forth are ultimately not a large enough part of the quality of a match. After enough searching, you may find yourself in a particular relationship and discover that it's a really great match, way better than you expected beforehand, way better than you could ever expect beforehand, from anyone. And then you will be done.

I know this is a long post, but I don't like to split these things up into disjoint pieces.  Read on if you want to see what else falls out of the model, now that we have it.

Observations, questions, extensions



Which sorts of characteristics are really important for match quality?  I haven't said anything about how characteristics get converted into match quality, but presumably the function is not simple.  It takes the two vectors of characteristics and combines them in a potentially complicated way.  In reality, perhaps we observe the match quality directly, but not the function that produced it from the characteristics.  We can theorize about the relationship between characteristics and quality, but how good should we expect these theories to be?

If we know that certain characteristics lead to higher match quality, we can put a lot of effort into surrounding ourselves with people who have those characteristics.  We can apply to schools or jobs that attract like-minded people, and let the process do a lot of screening for us.  On the other hand, if we don't actually have a good idea of where match quality comes from, then our attempts to screen for preliminary characteristics are largely a waste of effort.  If we mostly need to be in a relationship to find out the quality of the relationship, then we should try to have more relationships, not "better" ones.


*

I glossed over behavior out of a relationship above, but the model actually doesn't rule out it being optimal to spend time out of any relationship, even if there are currently people willing to match with you. That comes from keeping the caps separate, as I said:
If you are in a relationship, there's a cap on how much you're allowed to learn about anyone else. Conversely, there's a cap (not necessarily the same, presumably higher) on how much you are able to learn about someone else without being in a relationship with them.
By becoming close friends with someone while single, perhaps you can get relatively far into a relationship while still being allowed to search over other people. Once you are in a relationship, there may be a hard bound on how much time you can spend with other people you might be interested in. But out of a relationship, it may be that you can get pretty close to multiple people, without "dating" and thereby activating the restriction on how you spend your time outside of that particular relationship. Not necessarily a bad idea. Yes, dating can irrevocably destroy a friendship and all the time that went into it. But on the flip side, the time you spend in an ultimately failed relationship is more costly per day, since it more seriously restricts your search behavior. So the more you know about someone prior to starting a relationship with them, the less you have to figure out when time is being billed at a higher rate.

Another reason to stay (or become) single that's not captured by this model is this: when you're single, you are significantly more likely to discover who else is interested in you. Here we're assuming this is just known, but clearly it's private information in real life...info which people are reluctant to give up, especially if you're spoken for.


*

In a match, quality along easily observable characteristics versus less observable characteristics says something about how the match is likely to have been initiated. For example, if a really high-quality person is matched with someone who's low-quality along the most visible dimensions, it is more likely that they knew each other for long enough to uncover some of the less visible dimensions before dating. If you feel your most easily-observable characteristics don't adequately represent who you really are, you may have much better luck cultivating friendships than asking people out on the fly.

*

And if you do ask someone out on the fly, their expected unobservable quality could even be decreasing in their attractiveness, conditional on them saying yes. Lower quality people are more willing to say yes, so if they're willing and attractive, it could mean that they are deficient in less observable categories. Similarly, if you know your friend is really high-quality, and her boyfriend seems "not good enough" when you first meet him, then you might actually conclude that he's better than expected along the deeper dimensions you can't observe in one interaction. Better, perhaps, than you'd expect a really handsome guy to be.  (But I don't know how far it makes sense to push this line of counterintuitive reasoning.  Maybe your friend is just deluded.)

Empirically, we may find that relationship length is decreasing in attractiveness of the other party, over the range of time during which a lot of hidden characteristics are discovered. Hmm. (Perhaps economists shouldn't do it with models after all).

*

Note that these effects are dampened along dimensions where there is widespread disagreement about "quality."  It could be interesting to think through how more and less observable characteristics break down into matters of personal taste versus widespread agreement over quality.

*

Finally, a note on this search model vis-a-vis the matching literature. The Gale-Shapley Deferred Acceptance algorithm is beautiful and simple, but it assumes complete information. How might we adapt the notion of a stable matching (across all of society) to our environment?

Well, we could say a t-stable matching is a matching where every individual relationship is t-stable. But it seems to lose its edge, no? As soon as we introduce incomplete information and a cost of searching (here it takes time), the space of t-stable matchings gets very large, and there's no reason for any such "stable" matching to survive for any length of time. On the other hand, a "t-superstable matching" is much closer to the Gale-Shapley notion of "stable matching." But in general there's no reason to expect a t-superstable matching to exist, so we might want to think about conditions that would ensure it.  I suspect they would be far too restrictive to be realistic though.  Where would we take this?  On the one hand, incomplete information is nice in that it potentially supports a lot of matchings that we see in the real world, which would not survive under complete information.  But on the other hand, that also means the theory isn't going to give any sharp predictions.  And the matchings are unlikely to be stable over time, because there is constantly new information flowing, which changes people's decisions.  I suppose superstability could potentially be a decent solution concept for the subset of people who are married, but this isn't without its problems either.
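For reference, the complete-information benchmark is simple enough to sketch.  Here is a minimal Python version of Gale-Shapley deferred acceptance (assuming complete, equal-length preference lists; the example names are made up):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley: proposers work down their lists; each receiver holds
    the best offer seen so far.  Returns the proposer-optimal stable
    matching as a {proposer: receiver} dict."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # index into own list
    engaged_to = {}                                # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged_to.get(r)
        if current is None:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][current]:
            engaged_to[r] = p
            free.append(current)   # jilted proposer resumes searching
        else:
            free.append(p)         # rejected; will propose to next choice
    return {p: r for r, p in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))
```

Note what the algorithm takes as given: the full preference lists.  In the incomplete-information world of this post, those lists are precisely what each agent has to spend periods learning, which is exactly where the algorithm's stability guarantee stops applying.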

Put a bunch of people in a room. Give each person one observable characteristic, and one characteristic that can only be discovered by spending a period with them. What happens?

Bayesian approaches to matching have hardly been touched.

Friday, July 29, 2011

Determination

This post from Alex Tabarrok caught my eye over at MR:

Determinists argue that fault and blame have no place in criminal “justice”. Neuroscientist David Eagleman, for example, made this argument recently in The Atlantic:
The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.
While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot...
Eagleman and other determinists are against punishment but they recognize that incarceration still has a role to play because the public has a right to be safe. Philosopher Saul Smilansky now pounces with a timely paper on determinism and punishment.
It is surely wrong to punish people for something that is not their fault or under their control. (Hard determinists agree with this premise.) 
But incarceration is a type of punishment so under the hard determinist view, justice requires that when we incarcerate criminals we must also compensate them to make up for the unjust punishment...[which] however, is very likely to cause a big increase in crime and that is also unjust.
  
[bold added by me]

The problem is clearly that whoever is defining justice has tried to make it too many contradictory things.  Which has nothing to do with hard determinism itself, i.e. the belief that the entire universe is a big mechanical clock.

This "surely wrong" statement would seem to rest on the assumption that it is inherently wrong to punish an individual who is not morally guilty.  But since when is punishment only for the bad?  Punishment does more than just exact justice; it also incentivizes good behavior.

I think it's a big mistake to define justice with respect to the ex post realizations of random variables, as opposed to the ex ante gambles themselves.  It is perhaps tempting to say things like, "The individual has a right to not be punished for actions that are not morally wrong," or, "It is unjust to punish people for actions that aren't morally wrong."  But in reality, the universe is a fundamentally uncertain place and we are often willing to take gambles that sometimes lead to bad outcomes like punishment.  In the Rawlsian sense, before we knew what side we'd be on, we might want the package deal where people who do socially undesirable things are punished, because it incentivizes good behavior, even though we will sometimes get picked up in the net ourselves.  If we opt into that society, what's surely wrong with it?

By default, there is uncertainty in everything that happens to an individual.  And when society gets involved, it actually does an extraordinary job of mitigating that uncertainty.  (With money, we can effectively store enough food to feed ourselves for the rest of our lives.  And isn't it nice to not be randomly attacked by wolves all the time, or for that matter the clan next door?).  But society cannot and probably should not mitigate all the uncertainty in life, because that creates a massive incentive problem!  The best gamble, in the sense that we would most prefer it beforehand, is likely a nontrivial one that involves some punishment...regardless of whether there is such a thing as moral culpability.

No, if you buy hard determinism, the real difficulty is not with the formal legal system.  The real difficulty is Rose and Colin's problem, which revolves around social norms.  If society doesn't think individuals are culpable, it cannot readily punish them with disapproval, and disapproval is a very large part of what keeps people in line.  Should blameworthiness be "removed from the legal argot," as Eagleman suggests?  To the contrary, by all means keep it around if you can!  The problem is when it gets forcibly stripped from you, because you realize it shouldn't technically exist.

Determinism, however true, is not necessarily a good philosophy to spread.  Therefore let me assure you that I definitely do not believe in that stuff.  Determinism, ha ha!!1!  Please!

Wednesday, July 20, 2011

Haikunomic 07: Sordid Lynx



 “Wait, what? Jeff Ely
is reading this as we speak?
How could you know that?”

“Simple.  We can just
condition this on the fact
that he’s reading it!”

“Ah, I think I see.
The conversation only
happens when he reads?”

“Well technically there’s
a bunch of confused readers
who are not named Jeff.” 

“Huh. Maybe they could
pretend to be Jeff? Would that
make the haiku work?”

“No, it would still be
common knowledge, I think, that
they’re not the real Jeff.”

“Yeah, we’d know it was
a lie…so, what’s it take to
pass for Jeff Ely?”

“Uncommon knowledge.
Like, you need to know about
charcoal bags and stuff.”

“And you gotta know
that I know that you know that…
He likes that stuff, man.”

“Yeah, true.  Anyway,
I decided I don’t care
about the non-Jeffs.”

“Hey, can we stop talking in haiku?  I mean, isn’t it kinda Cheap, calling it a haiku but just going on and on anyway?”

“OK fine.  Yeah but anyway, so I decided that today I just don’t care about everyone who’s not Jeff.  So Jeff is reading this right now, and if you’re not Jeff, I’m sorry but I set your lambda to zero so your opinion just doesn’t matter.”

“Sneaky.  Kind of mean, tho.”

“I’ll make it up to them later.  Anyway, so Jeff Ely is reading this right now.”

“I think we’re on the same page.  Well in that case you should probably hurry up and say something smart.”

“Why would I want to do that?  Don't you know that anything I say, my reader will think?  But Jeff already has his own smart thoughts and all, that would be a waste of my powers.”

“Anything you say, he’ll think?”

“Yeah, watch: ‘I am a sordid lynx,’ he thinks.  See, he just thought that.  He read it, so he thought it.”

“I see.  Well if he thinks so himself, it must be true.”

*


By reader request, Jeff Ely gets the treatment today.  We were happy to fill this request; we appreciate Cheap Talk around these parts, even though we are poor grad students with no grill and therefore no bags of charcoal in need of opening.  Furthermore, according to a rough back-of-the-envelope calculation, I would have to be paid $0 per word in this haiku to make my talk as cheap as Jeff's; this compares favorably to Mankiw from last time.


If you are just tuning in, you can access past and future haikunomics here.  It's probably a good idea, but I can't be sure; although I know you're reading right now, I don't know when right now is, and I certainly can't make any promises about future haikunomics.  

To be honest it's probably all downhill from here.

Sunday, July 17, 2011

Cat call

Now accepting suggestions for whom to feature in the next and upcoming haikunomics.  They should probably be economists (bloggers or not), but feel free to submit any nominations you like (multiple nominations are fine), either in the comments or by email.

Well, okay.  If you give me the name of your pet cat, you probably will end up with a haiku about the economics of cats at some point.  But consider that a lower bound on what's reasonable...for now.

Saturday, July 16, 2011

Can't you see that rich people drive fancy cars?

I once knew of a guy who was obsessed with the fact that he went to Yale.  Indeed, I was warned that his whole family was obsessed.  Then I met his mom at a party and sure enough, she somehow managed to work the fact that he went to Yale into the very first sentence she spoke to me.  (It may have been a run-on sentence, but even so...)

After you run into enough of these people, it's tempting to conclude that Ivy League graduates are a pretentious lot.  But of course there's a serious selection problem here: there are plenty of Yalies who do not go around announcing their Yaliness, and you never find out they're Yalies.  Your mental sample of Yalies is disproportionately full of the pretentious ones who go out of their way to declare themselves part of that group.

And so we walk around systematically thinking that Yale graduates on average are more full of themselves than they actually are.  (Fight this).

More generally, for most anything which correlates with higher status, we systematically tend to overestimate the degree to which people are pursuing it for the sake of status alone.  Making more money boosts both absolute welfare and relative status, but we are more likely to notice the people who hold their wealth over us, not the ones who blend in quietly.  On average, the rich are probably less obsessed with their riches than we imagine.

Updated 7/20: The last sentence should really have read, "On average, the rich are probably less obsessed with their richness than we imagine."  (By which I would have meant relative richness, i.e. being in the class of people called rich, rather than the absolute level of one's wealth).  Thanks to a clarifying comment from Tony.
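The selection effect described above is easy to simulate.  In this sketch (all numbers and the "obsession" scale are hypothetical), each graduate gets an obsession level drawn uniformly on [0, 1], and the more obsessed someone is, the more likely you are to hear about their alma mater; the average obsession among the people you notice then sits well above the true population average:

```python
import random

random.seed(0)

N = 100_000
population = [random.random() for _ in range(N)]  # obsession levels in [0, 1]

# You only notice someone with probability equal to their obsession level,
# so the noticed sample over-represents the obsessed.
noticed = [x for x in population if random.random() < x]

true_mean = sum(population) / len(population)  # about 0.50
noticed_mean = sum(noticed) / len(noticed)     # about 0.67: the biased mental sample

print(round(true_mean, 2), round(noticed_mean, 2))
```

The noticed sample averages roughly two-thirds even though the population averages one-half, which is exactly the "mental sample" distortion the post describes.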

Friday, July 15, 2011

I'm angry so I'm not angry

Rose and Colin are in a relationship.  Because they are good game theorists, they think of it as a repeated game.  Rose is in charge of the rows, of course, and Colin has the columns.  (That's just how game theorists roll).

In particular, it's Colin's job to take out the trash every day.  Now, Rose understands that she can incentivize good behavior by punishing him sufficiently for any deviations from doing his job.  But let's say there's some essential noise.  The probability of remembering to take out the trash on any given day is not 100%. Colin could put more effort into remembering, but it would be prohibitively costly to actually remember all the time.  In fact it is socially optimal for him to just remember most of the time.  Rose knows this, and she sets up her punishments to incentivize the optimal probability of taking out the trash.
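The claim that Colin should remember only most of the time can be made concrete with a toy model (all numbers hypothetical).  Suppose Colin picks a probability p of remembering, the household values a trash run at b = 10, and the effort cost c0 * p^2 / (1 - p) blows up as p approaches 1, so perfect memory is prohibitively costly.  A grid search then finds an interior social optimum:

```python
# Toy model: choose the probability p of remembering the trash.
# Benefit is b per unit of probability; the effort cost
# c0 * p^2 / (1 - p) diverges as p -> 1, so remembering
# *all* the time is prohibitively costly.
b, c0 = 10.0, 1.0

def welfare(p):
    return b * p - c0 * p**2 / (1.0 - p)

# Grid search over p in [0, 1); the optimum is interior.
grid = [i / 10000 for i in range(10000)]
p_star = max(grid, key=welfare)

print(f"socially optimal p = {p_star:.2f}")  # roughly 0.70, well short of 1
```

Rose's punishments are then calibrated so that p_star, not p = 1, is Colin's best response; the exact punishment scheme depends on details the post leaves abstract.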

Now, here is the first odd thing that comes with being game theorists with full awareness of what's going on.  Rose knows she is incentivizing optimal behavior, so when Colin forgets to take out the trash, the lapse is itself part of optimal behavior.  But Rose must still punish him, even though they both know he's done nothing wrong, because the punishment is what keeps Colin from doing something wrong in the future.

Let me be clear that because these are game theorists -- or just rational people more generally -- Rose and Colin understand that behavior should be evaluated for optimality not based on what actually happens after the fact, but rather on what was expected to happen beforehand.  If Colin accepts a bet on a coinflip, heads +$10, tails -$5, Rose will be happy with him.  To get angry at him when he happens to lose makes no sense, because the actual realization of the coin flip is not tied to Colin's behavior in any way.
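The ex ante standard here is just expected value, and for the record, the bet in the example is a good one:

```python
# Ex ante evaluation of the coinflip bet from the post:
# heads +$10, tails -$5, each with probability one half.
expected_value = 0.5 * 10 + 0.5 * (-5)
print(expected_value)  # 2.5: a good gamble ex ante, even when tails comes up
```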

So in this household, Rose (bless her heart) is not actually angry with Colin when he forgets to take out the trash.  She is not bothered by a bad realization of a random variable, so long as he is drawing from the right urn.  She is not angry, not even for a moment, when he drops a glass.  It happens to the best of us...the probability of breakage will always be nonzero.  Sometimes the situation just sucks, she is fond of saying.

But here's the second odd thing that comes out.  I said before that Rose "punishes" Colin when he forgets the trash.  But I didn't say how.  Even for economonomic types, there isn't lots of money and goods flying back and forth in the household.  More likely, if Colin does something bad, he is punished with Disapproval.  He likes Rose, you see, and he doesn't like it when Rose is upset with him.

And so we come to it.  Do you see the problem here?  Rose is supposed to punish Colin with anger, even though she isn't actually angry with him because she knows she is incentivizing ex ante good behavior, and the ex ante criterion is the correct way to evaluate behavior (absent incomplete information, which we are ignoring today).

This is a huge problem for Rose and Colin.  Now, if Colin didn't really know what was up, possibly Rose could pretend to be angry. But since they're both game theorists, it is common knowledge that she isn't really angry.  Uh-oh.

But wait...so what actually happens?  I mean, if Rose can't credibly threaten to punish Colin, then he won't actually behave the way she wants him to, and then she'll actually be angry with him, but if she's going to actually be angry with him, then he'll do what he's supposed to, but then...

That's something for you to chew on.


*****************************
For Further Thinking
*****************************

So emotion-based punishments only work to the extent that your emotions are correlated with the states of the world in which you'd like to punish.  Being "in control" of your emotions normally means being able to damp them down, but what about amping them up?  It could be a useful ability, and there's information you can learn that will do its best to kill it.  Installment #643 of Information Is Not Always Good.

*

I do want to be explicit that there's nothing fundamentally inconsistent about having preferences over the state of the world, and therefore emotions that are tied to the realizations of random variables.  But how angry can Rose be about the simple fact that the trash is inside versus outside, versus the fact that Colin failed to take it out even though he was supposed to?  It's the latter that she's supposed to use as a lever, and it's the latter that she cannot use as a lever.  Furthermore, while we might like to believe it should be "enough" to make her preferences over states of the world known to Colin, who cares about her, in reality the two of them will always have somewhat different preferences even after taking each other's into account, and there will always be bargaining in the relationship to trade off between them.

*

So long as we're here, it goes even farther.  Picking the right urn is itself a meta-process, and on some level we're all just trying to do our best in this world, according to preferences we're all just as entitled to as anyone else.  I don't want to get too meta today, but I actually have a hard time getting angry about any behavior at all.  (Nor am I good at faking it).  This is mostly a good thing, I think, but it isn't without its drawbacks.  And while I think it's beneficial to me overall, it's not at all clear that it would be socially optimal for everyone to be like that.  Social approval and disapproval is an enormous part of what holds our society together.  I'll leave it at that for now.

Wednesday, July 13, 2011

Amazon Amazon

Did you know that when you google Amazon...well Amazon.com is the first hit, unsurprisingly. But above that you will likely see an ad.  For Amazon.  Leading to the same place.

This is also true for many (but not all) of the other commercial entities I tried out. Somehow I never noticed this before now, most likely because when do I ever look at those ads in the first place?

Theories as to why they're paying for this ad?

And we're back!

It's been quiet around here lately.  I've been up to a lot of things, most recently studying for my prelim exam in mathematical economics.  But that's behind me now, which means a couple things:  
  1. Back to blogging!
  2. No more exams ever again! (assuming I passed, of course)
We will shortly return to our usual programming.

Thursday, June 30, 2011

Eternal youth

The opening words from a Robson and Kaplan 2007 paper I happened upon:
One way to illustrate the effect of aging on longevity is to calculate the life expectancy of nine year olds if they could sustain their current mortality rate.  For the U.S. population in 2003, this life expectancy would be just over 7,000 years. (See United States Centers for Disease Control, 2006.) That is, we would not be immortal, because there is still a constant positive probability of dying; but our lives would be vastly longer if mortality risks did not increase with age. In fact, 2% of the population of nine year olds would live to almost 30,000 years of age. 
Haven't read the rest of the paper but I thought it was an interesting tidbit.
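The quoted numbers are consistent with a simple constant-hazard (geometric) model.  Assuming an annual mortality rate q of about 1 in 7,000 -- a hypothetical value reverse-engineered from the quoted life expectancy, not taken from the CDC data -- expected remaining life is 1/q, and the share surviving another T years is (1 - q)^T:

```python
# Constant-hazard model: the same annual death probability q every year.
# q is a hypothetical value chosen to match the quoted ~7,000-year figure.
q = 1 / 7000

life_expectancy = 1 / q            # mean of a geometric distribution
survivors_27k = (1 - q) ** 27000   # share still alive after 27,000 more years

print(life_expectancy)             # 7000.0
print(round(survivors_27k, 3))     # about 0.021, i.e. roughly 2%
```

That the survival share at roughly 30,000 years comes out near 2% matches the paper's second claim.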

Tuesday, June 28, 2011

Why not optional exile?

[Posting will continue to be light in the near future]

The point of a prison is twofold. For one, prison is no fun, so the threat of imprisonment is a deterrent against crime. For another, locking criminals up keeps them from committing additional crimes.

The thing is, imprisonment is expensive. By contrast, exile roughly accomplishes both of the above effects -- it's less fun than not being exiled, and it keeps criminals from committing additional crimes against people in our society -- but we don't have to pay to house them. It's sort of a jerk move towards the rest of the world, but (a) that hardly implies Americans wouldn't do it, and (b) the exile need not be unilateral.

Exile could simply be used as a sentence.  Or the prisoner could even be given a choice -- optional exile or a prison term -- so it's not unfair to them.  Alternatively, we could just make escape easier for people awaiting likely imprisonment (allow bail, don't nab them at the airport, etc.).  But instead we usually get bummed out when a criminal manages to skip town. (Darn!  We almost had to deal with this guy for the rest of his life!)

Exile need not be indefinite; like prison sentences, a term can be specified.  Amnesty can even be granted to those who have escaped and been gone for a long time.  Exile could be a decent substitute for shorter-term prison sentences (the sentences it replaces can't be so severe that exile fails to be a comparable deterrent). It is obviously not the answer for everything, but I'm surprised it's not more popular, especially on those occasions when the prisons get so overcrowded that the Supreme Court orders a mass release of inmates (into not-exile).

Sort of curious why I never hear talk of this.  (Or maybe I've just missed it).