Wednesday, March 30, 2011

Therefore judge not their grasp of reality

When a person's pronouncements have a serious impact on how others behave, there may be good reason to be strategic about those pronouncements. We shouldn't expect them to be entirely honest.

When the Fed chairman comes out with a prediction about where the economy is headed, it has a serious impact on the economy...so he's not necessarily going to speak his mind. If the "prediction" later turns out to be wrong, we don't really get to turn around and call him a fool for believing what he believed, because we don't know what he really believed. We only know what he claimed to believe, and maybe what he said really was the best thing for the economy.

On the other hand, when there's a large number of people coming to a consensus -- so that no one person has a significant effect on what is believed in the aggregate -- then they may freely speak their minds. Absent other motives, we can be more sure that this consensus accurately reflects true beliefs.

So consider the difference between, say, a Democratic president on the one hand, and the Democratic party on the other. Or, the difference between a Democrat after he becomes president, versus before when he was just a member of the Democratic party (perhaps a congressman). Is there something predictable about the evolution of his stated beliefs? Some component that we can argue is due specifically to the increased impact his stated beliefs have on the world?

Political pronouncements are of course muddled by other sorts of motives, but that doesn't mean we can't say something interesting. It would also be hard to disentangle the strategic effect from the fact that the president learns a lot of secret information that he did not have before, and which he cannot necessarily share with the public. What if, when you become president, they sit you down in a room and expose you to a secret but compelling argument for why it's really sensible to start some war you never would have considered before? Perhaps we can never know what the president really believes, or whether we'd do the same thing if we were in his position, with the same information.

Haikunomic 05: A Haiku that Derek Neal would Hate

A lovely guest haikunomic from Tony:

Econometrics
Hey, what a nice set of tools
The world is my nail.

*
I'm no econometrician, but as I am TAing for Tony's Honors Economometrics class right now, we may also be talking about some economometrics on this blog in the near future. As Tony reports,
econometrics
is economometrics
done without econ

Saturday, March 26, 2011

Stealing

At this point I think I can just say that if everyone is stealing $100 worth of office supplies, it's really better thought of as part of their wage, or like any benefit. Possibly better than $100 more in wages (hey, nontaxable!), although possibly worse if they value the office supplies at less than it actually costs the company to buy them.
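
A minimal sketch of that comparison, with illustrative numbers of my own (a 25% tax rate and a few hypothetical valuations, none of them from the post):

```python
# Illustrative numbers only: compare $100 of office supplies with a
# $100 raise taxed at 25%. Whether theft beats cash depends on how the
# employee values the supplies relative to what they cost the firm.
TAX_RATE = 0.25
net_raise = 100 * (1 - TAX_RATE)        # $75 after tax
for valuation in (60.0, 80.0, 100.0):   # employee's value of the supplies
    better = "supplies" if valuation > net_raise else "raise"
    print(f"values the supplies at ${valuation:.0f}: prefers the {better}")
```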

By the way, unenforced rules against theft can be an effective method of price discrimination. In high school, many of my friends used to ask for water cups from Chipotle and fill them with soda; in theory Chipotle could even be happy with such behavior if the only people willing to steal in such a manner are those with a relatively low willingness to pay (such as teenagers). (I'm not suggesting that's actually what was going on). As another example, we could set up a socially conscious bread stand that was easy to steal loaves from, with the understanding that only desperately starving people would be willing to steal to eat, and we actually want them to. Depending on the norms for stealing versus lying, this could discriminate between customers more effectively than a sign saying, "Free bread if you're starving." Indeed, even in the absence of such norms, note that there's a risk of being caught and punished for stealing (unlike lying); a bread stand that secretly chose not to pursue enforcement would still be taking advantage of the threat's ability to separate consumers, effectively giving away bread only to those people who are so desperate as to make the risk worthwhile.
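
Here is a toy model of this screening story in Python. The numbers and types are illustrative assumptions of mine, not anything from the actual Chipotle anecdote: each consumer has a willingness to pay and a personal cost of stealing (expected punishment plus conscience), and the posted price and the stealing cost end up screening different groups.

```python
# Toy screening model, numbers invented: each consumer has a
# willingness to pay v and a personal cost of stealing s (expected
# punishment plus conscience). The posted price screens buyers; the
# stealing cost screens everyone else.

def choice(v, s, price):
    """Return the consumer's best action."""
    payoffs = {"walk away": 0.0, "buy": v - price, "steal": v - s}
    return max(payoffs, key=payoffs.get)

PRICE = 2.00
consumers = [
    # (label, willingness to pay, cost of stealing)
    ("adult who likes soda",    3.00, 5.00),  # high scruples: buys
    ("adult lukewarm on soda",  1.00, 5.00),  # walks away
    ("teen who likes soda",     1.50, 0.50),  # low WTP, low s: steals
    ("teen lukewarm on soda",   0.25, 0.50),  # not worth even stealing
]

for label, v, s in consumers:
    print(f"{label:24s} -> {choice(v, s, PRICE)}")
```

The theft "channel" behaves like a second, lower price that only low-valuation, low-scruple types are willing to pay.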

Friday, March 25, 2011

Haikunomic 04: Menu Costs?

So many waiters
I mean, waiting customers.

*
Or is the bisque just too cheap at 6:30 on a Friday night?

Tuesday, March 22, 2011

Cheating: a side note

These are problems with the instructor or classroom environment, not necessarily the student. In this model of the world, students are just trying to do the best they can, given the environment they face. If the pressures are too great, the communication and enforcement of the rules too weak, the long-term objectives too muddled, and the motivation for the subject too thin, who can blame the student for cheating?
Yeah, it feels to me like the social rhetoric surrounding cheating makes it easy to pass too much of the blame onto the students, who -- for the reasons we've been discussing -- aren't necessarily in the best position to prevent it. If we replaced (or supplemented) "How could you?" with "How could I let you?"...maybe a lot less cheating would occur. The best classroom environment is one in which people can't cheat, or in which they will surely be caught and therefore don't cheat, or in which the penalty for cheating is so high that nobody dares cheat. (There's a tension here, though: you have to be willing to punish people who cheat, even if you privately realize that, in truth, the existence of cheating makes cheating itself morally ambiguous).

So maybe we should be pushing harder on teachers. But if so, how? How do you incentivize a teacher to catch cheating? The trouble is, the teacher is also the one who monitors cheating. How can you punish them for failing to catch something when they're the only one who knows whether there was something to catch or not? Who watches the watchman? It is an interesting problem, and not without solutions. Let's plant students to cheat every so often, and punish the teacher for failing to catch them!

Monday, March 21, 2011

Cheating, part 2: "The situation just sucks"

In Cheating, I gave an example where it is socially optimal to cheat, given that others are cheating. The idea of the example is that when only relative ranking matters, everyone's cheating cancels out, preserving the original, socially optimal (i.e. true) ranking.

Let's tweak this a tad by saying that absolute performance matters too, but just a little. In fact, let's say that our top priority is getting the ranking right, but conditional on getting the right ranking, we also would prefer that the test scores accurately measure a student's true ability. In this case, it is no longer socially optimal for everyone to cheat; the ranking would be the same, but the scores would overstate everyone's true ability.

On the other hand, it's still socially optimal for any individual to cheat, given that others are cheating. And here we come to one of the great subtleties of economics. Even though social welfare is not maximized by everyone cheating, each person is behaving socially optimally when everyone cheats!

What's going on here? Somehow, putting decisions into the hands of lots of individual actors changes the game, even when those individuals still have the same goal as the society they make up. It works like this:
  1. As an individual, if your goal is social welfare, then certainly you should choose the course of action that maximizes social welfare, taking as given all the things about the world that you can't change.
  2. Since no student has any say in the behavior of the other students, all a student can do is decide how to personally behave, taking other students' behavior as given.
  3. When everyone else is cheating, it's socially optimal for an individual to cheat, since that's the only way to preserve the true ranking of students, which is the most important thing.
  4. So when everyone is cheating, each cheater is behaving socially optimally!
As the example demonstrates, socially optimal behavior given the behavior of others is simply not the same as jointly socially optimal behavior, even when everyone is doing it.
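
For the concretely minded, here is a minimal sketch of the argument in Python, with made-up numbers: three students with distinct abilities, a fixed score boost from cheating, and a social welfare function that puts a big bonus on preserving the true ranking and a small penalty on each inflated score.

```python
# A minimal sketch of the cheating game, numbers made up: welfare puts
# a big bonus on preserving the true ranking and a small penalty on
# each inflated score.

RANK_BONUS = 100.0              # welfare from a correct ranking
INFLATION_COST = 1.0            # welfare lost per inflated score
ABILITIES = [70.0, 80.0, 90.0]  # true abilities, lowest first
BOOST = 15.0                    # cheating boost, bigger than the gaps

def welfare(cheats):
    """Social welfare of a profile of cheat/no-cheat choices."""
    scores = [a + BOOST * c for a, c in zip(ABILITIES, cheats)]
    ranking_ok = scores == sorted(scores)   # true ranking preserved?
    return RANK_BONUS * ranking_ok - INFLATION_COST * sum(cheats)

all_cheat = [True, True, True]

# Given that the others cheat, abstaining wrecks the ranking for any
# student with someone below them. (The very bottom student is an edge
# case: falling further can't reorder anyone.)
for i in (1, 2):
    deviation = all_cheat.copy()
    deviation[i] = False
    print(f"student {i}: cheat -> {welfare(all_cheat):.0f}, "
          f"abstain -> {welfare(deviation):.0f}")

# Yet everyone cheating is jointly worse than no one cheating:
print("all cheat:", welfare(all_cheat), "| no cheating:", welfare([False] * 3))
```

Given that the others cheat, each of these students does better for society by cheating too; and yet the all-cheat profile is jointly worse than the no-cheat profile.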

I call this one of the great subtleties of economics because most noneconomists expect the behavior of a group of like-minded people to accurately reflect the underlying motives of those people. To be sure, this is eminently reasonable...I mean, if we all want the same thing, shouldn't we just be doing it? Without being tipped off, you could hardly be expected to realize there's even something worth considering. But in this case, the obvious thing is wrong. "Everyone cheat" is an equilibrium of a game in which everyone truly has good intentions. It's a bad equilibrium, in the sense that everyone would be happier in the "No cheat" equilibrium...but even good people can get stuck there. If everyone's planning to cheat because they think everyone else is planning to cheat, then who's to blame? It's not as if they're under the wrong impression or something; to the contrary, each person is right about everyone else, and they're doing the right thing in response! Sometimes the situation just sucks.

It's kind of important to understand the concept of a situation just sucking, without there necessarily being someone to blame, someone who caused it. To most people, when a horrible financial crisis happens, it must be that some bad people were behind it. When everyone in the bottom decile is cheating, it must be that they're bad people. But actually, it's easy to get this sort of behavior out of perfectly reasonable people. We've seen that even in the case where everyone literally wants social optimality, they can still get stuck in a bad equilibrium; relax this so that people are (reasonably!) allowed to care a little more about their own wellbeing, and it's cake to construct realistic examples where a bunch of decent people collectively do a bad thing.

*

This particular brand of misguided moralizing has so many faces. Athletes and their steroids. Lawyers and their...lawyering? Should we be angry at politicians for using any means necessary to get their way, no matter how ridiculous? Or could it be that in politics, weaseling is the socially optimal response to weasels?

When in Rome, my friends. Even if they're cheaters.

Friday, March 18, 2011

Cheating

Having tackled lying here, here, and here, the obvious next target is cheating, with stealing to come.

By cheating, I mean breaking a rule to one's advantage, be it peeking at someone else's exam or hand of cards. Cheating is supposed to be fundamentally wrong, whether in the classroom or the card game.

As an example, suppose you're taking an exam that will be strictly curved to 10 A's, 10 B's, 10 C's. Then the sole purpose of the exam is to rank the students relative to each other. Why is cheating a no-no? Because it could potentially disrupt the true ranking, subverting the entire point of the exam. Fair enough.

Of course, if everyone else is cheating...

If you've been following along previously, you probably know where this is going. To make the point cleanly, suppose you have been told in advance that for question #3, you will have to write out a randomly chosen line of some particular long poem, from memory. It's more trouble than it's worth, and you know that everyone else is just sneaking a copy of the entire poem into the test. The teacher is unobservant and certainly won't catch anyone. Do you follow suit, or forfeit the question? Obviously sneaking in the poem is cheating...but if everyone else is doing it, then actually you preserve the ranking by cheating too. Which is a good thing, right?
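
In case the mechanics aren't obvious, the whole point fits in a few lines of Python, with made-up scores:

```python
# Made-up scores: if everyone gains the same boost from sneaking in
# the poem, the relative ranking of students is exactly preserved.
true_scores = [62, 71, 85, 90]
cheat_scores = [s + 10 for s in true_scores]   # everyone cheats alike

def rank(scores):
    return sorted(range(len(scores)), key=lambda i: scores[i])

assert rank(true_scores) == rank(cheat_scores)
print("ranking preserved:", cheat_scores)
```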

Following the rules instead of the objective that generated them does not an angel make, although in this case it might make an unfortunate martyr. In this case, the real point of the exam is to accurately rank students, not to have students diligently follow rules governing their conduct.

Now let me be clear that while most cheating situations are far more complicated than the above, they nevertheless often contain the effect I've just identified as a component, which means you cannot make this problem disappear by adding on layers of complexity. Sure, cheating is typically not a binary decision -- how much to cheat? And when only some people are cheating, or cheating by different amounts, then cheating yourself corrects for the imbalance between you and the cheaters, while potentially creating an imbalance between you and the non-cheaters. What to do then? OMG moral ambiguity! It can get messy, but the answer is not that we get to ignore the above effect just because it no longer clearly dominates. Sometimes we have to take a stand; the answer will depend a lot on the particulars of the situation, and the objective chosen -- yes, a judgment call.

Take your "never cheat" if you want. If you're after simple rules instead of judgment calls, you might also consider only flinching when the brick is definitely positively going to hit you in the face.

*

"That's all fine and well as a point of theory, but as a practical matter, when have I ever actually been in a situation where cheating on an exam was the right thing to do?"

Yeah, I haven't gone through life righteously cheating on exams either. But maybe it has more to do with the situations I've been in, and not so much a fundamental drive to do what society says is the "honorable" thing. Maybe if I were in the bottom decile in high school, competing with the people around me to not be the one who fails, and everyone else in my decile was cheating, and the risk of getting caught wasn't too high...well, it could easily be not just selfishly optimal, but in fact even socially optimal for me to cheat. Is academic honesty about fairness? What could possibly feel less fair than me failing because everyone around me cheated?

Therefore do not be so quick to assume that in practice cheating is pretty much always wrong just because it has been for you, and do not be so eager to judge the cheating of people who have found themselves in very different situations from yours.

Do you feel differently? Do you feel like you would never cheat? Should never cheat? Something to chew on.

*

NB: I can think of two important objections to this and the Rules versus objectives post more generally. First, as a practical matter, simple rules can perhaps be ingrained into behavior more readily than more complicated ones, and may actually be a more effective channel to align individual behavior with socially optimal behavior. The other objection has to do with equilibrium selection in what is really a repeated game. These are topics for another time though, and are more caveats than contradictions to the above content.

Thursday, March 17, 2011

Scenic Moo

A semi-loyal reader reports: "i would start a blog but i dont have any good blog names"

Unacceptable! Unfortunately the best name is already taken, but do not despair. Here are some fresh ones that you may feel free to use:
  1. Echonomics. A fine choice if you are looking to create a blog that summarizes the work of others. I personally wish there were more such blogs.
  2. Walrasian Magic. Good, but caveats apply.
  3. Supereconomonomics. This could be interpreted as an attempt to one-up me, but more likely you will really be digging yourself into a deeper hole. At your peril.
  4. Economists Mind Their P's and Q's. I would be honored if you named your blog after one of my haikunomics. You could also put this on your department tshirt. It's not as racy as Economists Do It With Models, which may or may not appeal to your economist sensibilities.
  5. Oikonomics. As we all know, "economics" comes from the Greek "oikos," meaning household. Besides being a clean throwback to the root, this would be a particularly excellent title for a home economics blog.
  6. Oinkonomics. Perfect for the bacon-loving economist in all of us.
  7. Scenic Moo. If you're more of a beef lover, this one might be for you. As an anagram of Economics, it also sends the message: Economics is Messed Up. (a possible subtitle)

So there, no more excuses. Go start a blog.

Tuesday, March 15, 2011

A Moment of Silence

Let us all take a moment to quietly mourn the recent appearance of an actual regression in a New York Times article.

I was shocked at first, but then I realized this poor little testscore regression is playing the part of the villain, which just made me sad. It is a sorry state of affairs when regressions only show up for the express purpose of looking unclear, hopelessly complicated, disconnected from reality. This regression is a mere punching bag in an anecdote-driven story.

Journalists can be mean, testscore regression. I would be friends with you. (Well, maybe. That would depend on a whole bunch of things that weren't mentioned in this article...and virtually nothing that was).

Q-tip: Trevor

*

Added: Contrary to what the journalist might seem to think, "complicated" does not even begin to suggest "wrong." (The true objective is expected to be complicated!) In any case, in light of his complaints and my previous post, Rules versus objectives, the following occurs to me: It may actually be optimal if this regression is unintelligible to the teacher, who then does not know what particular rules she is being rewarded for following. Even if every feasible set of rules incentivizes suboptimal behavior when it is fully known, a teacher with only incomplete knowledge of the rules may not know enough to deviate significantly from pursuing the true objective. When there are many possible sets of rules that all get at the true objective from different angles, the expectation over them will be something in between. It need not be the case, but this "in between" may actually be closer to the true objective than any feasible set of rules!

That is, if you can't figure out how to game the system, you may as well play nice.
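
Here is a toy model of that logic, with assumptions entirely my own (nothing from the article): a teacher splits effort between two tasks, the true objective peaks at balanced effort, and each feasible "rule" rewards just one of the tasks. A teacher who knows exactly which rule is in force games it to a corner; a teacher who only knows that the two rules are equally likely maximizes expected reward instead, which lands her at the true optimum.

```python
# Toy model, assumptions mine: effort x in [0, 1] is the fraction of
# time spent on task A. Society wants balanced effort; each simple
# rule rewards only one task (with diminishing returns).

def true_objective(x):
    return 1 - (x - 0.5) ** 2           # peaks at balanced effort

def rule_a(x):
    return x ** 0.5                      # rewards task A only

def rule_b(x):
    return (1 - x) ** 0.5                # rewards task B only

def expected_rule(x):
    return 0.5 * rule_a(x) + 0.5 * rule_b(x)   # rules unintelligible

GRID = tuple(i / 1000 for i in range(1001))

def argmax(f):
    return max(GRID, key=f)

for label, reward in [("teacher knows rule A", rule_a),
                      ("teacher knows rule B", rule_b),
                      ("teacher can't tell which", expected_rule)]:
    x = argmax(reward)
    print(f"{label:25s} effort on A = {x:.2f}, "
          f"true objective = {true_objective(x):.3f}")
```

Of course, nothing guarantees that real rule sets average out this nicely; the sketch just shows that the mechanism is coherent.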

Thursday, March 10, 2011

Rules versus objectives

You've probably heard this one before: The principal has some complicated objective he wants the agent to pursue, but he can't get the agent to care about this objective directly. Instead, he can only create a bunch of rules that approximately incentivize the right behavior.

We face versions of this problem all the time, right? Any time we wish upon a genie, to cite one common example. Or, teachers may want their students to develop a deep understanding of the material, but they can only design exams that push students to learn approximately the right sort of stuff. (Some are more successful than others). Employers face a similar problem motivating their employees to do what's best for the company. The government, too, has complicated social objectives, but all it can do is create a set of laws.

The thing is that objectives are rarely simple, but rules generally need to be. Simple to communicate, simple to follow, simple to measure whether people are violating the rules or not. Therefore rules are biased towards being simple, whether the objective that generated them is complicated or not.

Now, with that as a prelude, what interests me is the problem of inferring objectives from rules. What if I, a lowly agent, actually care about the objective, but I can't observe it directly? What if I can only see the rules? Well, given the above, I should be especially leery of inferring simple objectives just because the rules are simple -- after all, the rules would be simple either way!

To be concrete, perhaps I am a benevolent citizen of society, and I'm trying to act in a way that is best for society (i.e. the actual objective of a benevolent government). I know what's legal, but not necessarily what behavior is best (for society). What can I infer from the laws? Hmm.

Well, here's something I can't infer. I can't infer that the laws themselves perfectly represent society's true goals. Is it socially optimal to come to a complete stop at every stop sign? Whether or not it is, you can bet the law will say "Come to a complete stop," because that's the simple statement closest in meaning to the true goal. So, reversing this, we can't infer from "Come to a complete stop" that a complete stop is actually what society wants us to do, right?

I worry that when the objective is not readily observable, people have a tendency to fixate on the rules themselves, to revere the rules as if they are the ultimate goal, as if following the rules is what makes you a "good" citizen. But legality is not the true benchmark of what's best for society, so it does not deserve our reverence. If you roll through stop signs, on some level you may feel like you're committing a minor infraction against society -- an infraction that is perhaps justified by your right to care about your own wellbeing -- rather than recognizing that it is probably optimal for society if people don't always come to a complete stop. Society spends a lot of time inculcating this idea, "Obey the Law, it's the right thing to do," but we should really move beyond it where we can. Society doesn't really want people to focus on obeying its laws, just like teachers don't really want students to focus on learning the particular material they can feasibly be tested on.

*

As long as we're here, I will just post a provocative question. Like societal law, religion generally offers simple rules for how to behave. In this context people are very interested in figuring out the actual objective of God, so perhaps they should entertain the possibility that the rules may not perfectly represent his true objective. For many, the rules become the "definition" of right and wrong according to God. But might he not believe in a more complicated notion of right and wrong and still find it optimal to send the same message? Is what he says actually sufficient to determine what motivated him to say it?

Here I am being quite loose, so objections are expected.

Thursday, March 3, 2011

Treading water, in a river

Just a reminder: America is getting richer, but many available consumption bundles are also constantly disappearing. Generally the disappearing bundles are ones that most people don't really want, but there's no economic law that says you have to be getting better off, even if you're one of the people getting richer.

For example, you can't just decide you're happy with your current computer and internet connection, and keep them forever. In 10 years they won't be selling that computer; they may not be selling that internet speed; and even if they were, computer programs and the internet would expect more from you anyway, diminishing your ability to function normally. Sites and programs want to run smoothly, but they also want to have more advanced features, and the minimum system requirements bar is raised as more people become equipped to clear it. You can't continue to consume the "same old" websites when they just don't exist in the same old form any more.

This is somewhat speculative, but my computing experience actually seems to get worse over the last couple of years before each new laptop, even controlling for hardware or software degradation within my computer. It's a bit like treading water, in a river. And rivers go downhill.

In a similar vein, when we hear about people in less fortunate parts of the world living on a dime a day, we tend to react by imagining what it would be like if we lived on a dime a day, in America. But even adjusting for purchasing power, that's not quite the right comparison because we actually can't buy the things here that they can buy for a dime. As we got richer, we abandoned a lot of bundles that most people no longer wanted, because they were no longer living on a dime a day. For instance, if you were sufficiently poor, you might be willing to trade regular food for a greater amount of lower-quality (less tasty, say) food. But perhaps sufficiently low-quality food is no longer readily offered in stores here, because so few people want it. And if you want to trade guaranteed high-quality hospital access for more food, forget it! You can't just sell off your American right to walk into any American-quality ER in the event of an accident. (more food, lower-quality health care) is just not a feasible bundle here.

Minority preferences are always at risk of going unserved. If you were looking to give such a group a welfare boost, might there be an advantage to catering to their preferences directly, rather than giving them a transfer that changes their minority status?


Tangentially related: Tiebout sorting

Tuesday, March 1, 2011

Lying, part 3

In Lying part 1, I argued that in situations where one party wants to communicate the truth to another party who wants it, lying may accomplish that goal. In Lying part 2, I argued that in situations where the first party doesn't want to communicate private information to the second party, lying may accomplish that goal. Today I will argue that there are also situations where the second party doesn't even want (or only thinks it wants) the information that would be communicated in the absence of a lie.

First of all, it's important to quash any notion that more information necessarily makes people better off. There are arguments and counterarguments that could be made, but for most people I don't anticipate this being a controversial point. Most people avoid searching for things like "smallpox" on Google Images (don't do it!). Most people don't want to be carrying around a memory of how horrible war is -- or how awesome heroin is -- for the rest of their lives. And most people, when they ask how you are, don't actually want to know that you are not fine today...because they barely know you, and it's awkward for both parties, right? (Forget Lying parts 1 and 2...how odd is it to make a rule that says you have to tell the truth when neither of you wants the truth to be told? Just say you're fine). And on a personal note, the maître d' does not want me to explain that actually my name is Xan, not Shawn...and neither do I, because what purpose does it serve besides wasting 20 seconds of both of our time? Truth can be costly and pointless to deliver; when it's time to put a name down, I for one am quite content to say "Shawn" and move on with my life.

These examples, ranging from the trivial to the intense, are picked from an entire menu of ways information can do us wrong...and while they hardly cover the gamut, I'm hoping they are suggestive of the variety of problems we can run into. (Apologies if you couldn't resist googling smallpox...in which case did I harm you by telling you something?)

Okay. Next, given that information does not necessarily benefit people, it immediately follows that people don't necessarily know what information will benefit them. Because, before they know it, how can they know it's good for them? (How can they know my name isn't worth the bother?) When people ask you a question, when they solicit information from you, they have some expectation about how much they will value that information -- evidently they think it will benefit them. But you have more than an expectation; you actually know it...and if you know them well enough, you may be in a better position to judge whether they would truly benefit from that information. If you care about their wellbeing, this is something to take very seriously. Just deciding to follow a simple rule like, "When someone requests the truth, give it to them"...well, it's certainly an easy rule to follow, but that's not enough in my book.

And once we get to this point, hopefully it's clear that a lie may come in handy, serving a purpose that benefits one or both parties. Just like in part 2, a lie may be the only way of not revealing problematic information. It may be that silence is not an option (after all, I have to say something to the maître d'), or it may be that silence conveys exactly what we're trying to avoid conveying in the first place: "What's in the box...an iPod?" "No." "...an iPad?" "No." "...an iPhone?" "Umm...no comment?" Wrapping paper is costly, but talk is cheap. If you're going to wrap your gifts with paper, don't be stingy with your words.

Morals

Of course, I'm not really just attacking the most extreme viewpoint, that increased information is always good. The real point is that information doesn't even come close to being always good. The point is to rip off information's holy cloak and actually consider what's underneath...because, once we recognize how far it is from being the ideal, we are forced to take seriously the question of whether and when we really want to pursue it. In truth, information is immensely complicated. In truth, it's hardly obvious what information should and should not be dispensed to whom. In truth, this is a hard problem, not an easy one, and hard problems deserve serious consideration, not simple rules like "Always tell the truth." Information is sometimes our enemy, sometimes our friend, and at times it is best accessed or avoided by means of a lie. What's left to say? Lies are not the enemy here; honesty and dishonesty are better thought of as weapons than targets. A knife is not good or bad, it is simply a knife. The question is how to use it.

By the way, I really wasn't sure how much content to put in this post. On the one hand, I worry that it will all seem obvious once it has been said. Nevertheless, my feeling is that people are often unthinkingly cavalier with information, conveniently justified by a simple appeal to the "sanctity" of truth. "Don't knock it till you try it," they say, as if you can just throw it away if you don't like what you learn. And when someone is wrong, do we always have to tell them? It's certainly hard to resist, and this is exactly when it's most tempting to believe that honesty is inherently justified, to ignore the possibility that maybe we should really stop and think about whether we should speak our minds. If the goal is wellbeing, truth is not sacred.

In the end, I figured that actually saying the obvious thing makes it harder to ignore, so I decided to say it a few different ways.

Finally, all of that said, don't get me wrong: truth is sacred to a rational agent whose goal is to understand reality. Just don't forget that in many contexts, information is also a lot like a good, with utility associated with the act of consuming it. And really, without free disposal, why should we expect info to be good for our wellbeing in general? No one wants an annoying song stuck in their head...