Sunday, April 28, 2013

Influence, Chapter 4

Chapter 4: Social Proof.  We care a lot about social approval and consensus.

I find this chapter both enlightening and upsetting.  Let's do the enlightening stuff first.

Different strokes for different folks
People look to each other for social cues on the correct behavior.  More than anything, they don't want to be seen as doing something foolish.  If you have a stroke in front of a bunch of people, it is entirely possible that people will:
  1. Keep calm and collected, secretly glancing around to see how others interpret your behavior.  Are you having a problem or are you just taking a nap?
  2. See that everyone else is calm and collected.  Deduce that they ought to be calm and collected.
  3. Ignore the problem like everyone else.
Needless to say, you would prefer that they help you.  And if there was just one person around, he would almost surely rush to your aid.  But in the crowd, that often doesn't happen for some reason.

By now we're all familiar with findings like the above.

Now, an economist might suggest several alternate explanations for the unhelpfulness of the crowd.  Perhaps they each have a diluted 1/n share of the responsibility for helping; everyone would like help to be provided, but they'd rather it be someone else.  We could perhaps even construct a game where the probability of No Help is increasing in n, even though everyone would prefer to help if they were the only person around.
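That diffusion-of-responsibility game can be made concrete as a volunteer's dilemma.  Here is a toy sketch of my own (not from the book): each bystander helps with some probability, and in the symmetric mixed-strategy equilibrium the chance that nobody helps rises with crowd size.  The cost/benefit ratio of 0.2 is an arbitrary assumption.

```python
def p_no_help(n, cost_benefit=0.2):
    """Probability that nobody helps in the symmetric mixed equilibrium
    of a volunteer's dilemma with n bystanders.

    Each bystander pays cost C to help; everyone gets benefit B > C if
    at least one person helps.  Indifference between helping and not
    pins down the equilibrium: (1 - p)^(n - 1) = C/B, which gives
    P(no help) = (1 - p)^n = (C/B)^(n / (n - 1)).
    """
    if n == 1:
        return 0.0  # a lone bystander strictly prefers to help
    return cost_benefit ** (n / (n - 1))

# P(no help) rises with crowd size, approaching C/B as n grows large,
# even though every single bystander wants the help to be provided.
probs = [p_no_help(n) for n in (1, 2, 5, 20)]
```

With a cost/benefit ratio of 0.2 this gives roughly 0, 0.04, 0.13, and 0.18: a lone witness always helps, while a crowd of twenty fails to help nearly a fifth of the time.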

But interestingly, psych studies find that help is almost always provided, even in a crowd, when it is perfectly clear that there is an actual problem. That is, the rampant nonparticipation seems to stem mostly from pluralistic ignorance and the fear of looking stupid in front of other people.  These effects are surprisingly strong.  Even a little bit of doubt can prevent people from acting.

So, here is Cialdini's recommendation if you are having a stroke.  Single someone out.  Point.  "You, in the red shirt! Call 911, I need an ambulance!"  Not only does this thrust the full responsibility onto them, it takes their uncertainty out of the equation.  Of course they should call an ambulance when someone asks them to; if you turn out not to need an ambulance, the burden of looking foolish will be on you, not them.

That's good advice.

Apocalypse Now! No, wait...Now!  No wait...Now, join us!
There is also a fascinating account of a Chicago apocalypse cult.  They were originally quite exclusive.  But when their doomsday predictions failed, they immediately turned to conversion efforts in an attempt to achieve social validation for their previous devotion.  If reality doesn't validate you, social validation can substitute for actually being right.

Accidental Suicides
There is also a plausible account of how publicized suicides lead to increases not only in suicides but also in suicides disguised as accidents.  Plane and car crashes rise in areas where there was a high-profile suicide in the news.  Now, it isn't remotely surprising that people disguise their suicides as accidents; however, notice that this connection between publicized suicides and subsequent accident rates gives us a way of estimating how many "accidents" are really suicides!

That is, if we assume that suicide publicity shifts suicide rates but not (true) accident rates, we can use that to split accidents into "true accidents" and "suicides."  Without doing this, we could be drastically underestimating suicide rates and overestimating accident rates.
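As a toy sketch of that identification idea (mine, not Cialdini's, with invented numbers): treat the accident rate in no-publicity periods as the true accident rate, and attribute the excess during publicity periods to disguised suicides.

```python
# Hypothetical weekly crash counts; all numbers are made up for illustration.
quiet_weeks = [102, 98, 101, 99]    # no suicide in the news
publicity_weeks = [115, 118, 112]   # after a front-page suicide story

# Identifying assumption: publicity shifts disguised suicides,
# but leaves the true accident rate untouched.
true_accident_rate = sum(quiet_weeks) / len(quiet_weeks)
observed_rate = sum(publicity_weeks) / len(publicity_weeks)
disguised_per_week = observed_rate - true_accident_rate
```

A real version would need controls for seasonality, traffic volume, and so on; this just shows the arithmetic of the decomposition.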

Perhaps a lot more people are unhappy with their lives than we think.  And on the flip side, cars are pretty safe, but perhaps they are a lot safer than the statistics would suggest...

(No, none of this analysis is suggested or reported by Cialdini.  He is more interested in the social influence suicides have, namely that they encourage other people to commit suicide.)

Anyway, this is where I start to diverge from his prescriptions:
  1. It seems an overreaction to recommend avoiding air travel at times when a public suicide has been in the news.  Really?  (He doesn't really give the relevant numbers for me to properly judge this though).
  2. I don't see why it is necessarily a bad thing to publicize suicides, just because they trigger suicides.  It is completely nonobvious that publicizing suicides actually increases the total number of suicides (or increases it by much) rather than predominantly shifting around their timing.  And even if it does significantly increase the total number of suicides, it is not obvious to me that this is a bad thing.  It is not necessarily bad for people who are really suffering to put themselves out of their misery.
Bystander effect, take 12
This chapter also contains the usual (wildly exaggerated) account of Kitty Genovese. (Which is not his fault).


Now, let's get into the stuff that really bothered me.

Sheep cyclones: Redux!  or, Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.
(I'm sure you will want to revisit that incredible clip of sheep cyclones!)
Every chapter of this book has a structure like: Here is a psychological quirk.  It generally makes sense that it exists, but it also creates some problems.  Here are a bunch of examples of such problems.  Finally, here is how to overcome them.

We are influenced by social cues because we rightly care what people think of us, and because what other people are doing is a pretty reliable shortcut for figuring out how to behave ourselves.  In this chapter, he acknowledges that the actions of others are generally useful information, but that herding behavior can go wrong.  He then strongly suggests that with a little awareness we can somehow magically take advantage of valuable social information while ignoring the pitfalls.

Ugh.  Social cues save us from the cost of gathering information for ourselves.  Sometimes this will lead us astray ex post, but in order to know which times, we would have to actually gather the information.  We can't magically know if the information is worth gathering, in advance of gathering the information!

The sad truth is that herding behavior is often individually -- and perhaps even socially -- optimal ex ante even when the outcome happens to be bad ex post.  The sheep in the sheep cyclone, or even the bison that follow each other over the cliff to escape the Indian hunting party, can each be behaving optimally the entire time, given their limited information.  Cialdini looks at the situation and says, "Well, obviously some of the bison should have looked up."  But that only seems obvious because Cialdini has information they don't have.

I won't elaborate on this point any more because this chapter review is already getting long.  Suffice it to say that Cialdini blurs the distinction between individual and social optimality, as well as the distinction between ex ante and ex post optimality.

I don't like this...but I can live with it.

What I can't live with is Cialdini's perspective on canned laughter.  He excoriates it!  It influences us to enjoy sitcoms unduly, he says.  People laugh more and rate a scene funnier if there is a laugh track.  For shame, CBS, you manipulative @#$&!%@s!  I knew The Big Bang Theory wasn't really funny!

One of the big themes of this book is that Cialdini harbors hostility for anyone who would "manipulate" us.  Incredibly, he goes so far as to say that "we should refuse to watch TV programs that use canned laughter."  Down with laugh tracks!

Umm...I think we are really losing sight of the goal here.

Isn't it a good thing if people find a show funnier?  The entire point of a show is to entertain us.  If we enjoy it more with canned laughter, what's wrong with that?

And besides, there's a symmetry here.  A sitcom with a laugh track is funnier; a sitcom without one is less funny.  Which one is "correct"?  Neither -- they are just different.  The former is more like being in a theater with a bunch of other people, the latter is more like being at home alone.  Yes, you truly are at home alone, but the whole point of television is to simulate events that aren't actually happening in your home.  It's all just pixels and machine-made sound waves.

Social laughter, even simulated, increases our enjoyment because we are social animals.  And that's nothing to scoff at.

Saturday, April 20, 2013


If you haven't seen this comic from the Oatmeal about mantis shrimp, please, go read it, now.  Because it is awesome.  It also has its problems, like for example the rainbow being a terrible example of how mantis shrimp vision is awesomer than ours.  That is our subject for today, because it's actually really interesting and not at all obvious why the rainbow wouldn't be a thermonuclear bomb of light and beauty to the mighty mantis shrimp.

But first, a primer on color vision.

If you are like most intelligent people, you have made it surprisingly far in life without properly figuring out how color vision works.  It's time to have The Talk!

Exhibit A: This is the color wheel we all learned about in kindergarten:

The color wheel is a sensible way of organizing the relationships between colors: red and yellow make orange, yellow and blue make green, blue and red make purple.  That's what actually happened when we mixed our fingerpaints together.

Exhibit B: This is the electromagnetic spectrum we all learned about in high school:

This is also sensible: light is a wave with a wavelength, so of course it should live on a line with varying wavelengths.  And again, orange is between red and yellow, green between yellow and blue, and purple between blue and...wait, what?

A paradox.  Is it a circle or a line? I mean, I know that in reality light lives on a line.  But I also know that when I mix red and blue together I get purple.  How does that make any sense given the above spectrum for visible light?  The color wheel's existence seems deeply inconsistent with how nature works.


And now, the answer to the paradox, which somehow almost nobody knows, even though we are intellectually curious people and vision is our most important sense.

When you look at an orange sweater, a bunch of light of various wavelengths comes into your eyes.  The question is how you process all this light and boil it down to the perception of a color.

The truth is that there are infinitely many different wavelengths coming in at varying intensities.  How much 500nm light is there? How much 501nm light? 501.5?  It's too much to completely record all this info.  So you take a massive shortcut.  In fact your eye has only 3 types of color-sensing cells: red cones, green cones, and blue cones.  A red cone is most sensitive to red light, although nearby wavelengths also trigger it to a lesser degree.  Similarly for green and blue cones.

All these infinitely many wavelengths coming in, then, are basically distilled down to three numbers, (R,G,B). How much are the red, green, and blue cones firing?  That's what gets passed on to the brain.
Observation: The pixels on your computer screen do not bother with wavelengths other than red, green, and blue light.  Since our eyes are going to collapse incoming light to (R,G,B) anyway, this minimizes energy costs without losing much generality.  Orange light is an inefficient way to stimulate red cones.
The brain, in turn, makes up a color perception for each possible combination of (R,G,B).  The orange sweater triggers a big response from the red cones, a medium response from the green ones, and a small response from the blue cones.  The brain identifies this as some sort of orange. 

Note that our little (R,G,B) system is more than sufficient to uniquely identify each color in the rainbow.  That is, shine light of any one wavelength into your eye, and it will trigger a unique combination of (R,G,B) cones firing.
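Here is a minimal sketch of that distillation.  The Gaussian sensitivity curves and the peak wavelengths are assumptions of mine chosen for illustration; real human cone spectra are messier, but the shape of the story is the same: one wavelength in, one (R,G,B) triple out.

```python
import math

# Assumed peak sensitivities (nm) and a shared width; purely illustrative.
PEAKS = {"R": 565.0, "G": 540.0, "B": 445.0}
WIDTH = 40.0

def cone_response(wavelength_nm):
    """Collapse a single wavelength into an (R, G, B) firing triple."""
    return tuple(
        math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH**2))
        for peak in PEAKS.values()
    )

# Monochromatic "orange" light (~600nm): big R response, medium G, tiny B,
# exactly the pattern the brain labels as some sort of orange.
r, g, b = cone_response(600)
```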

However...there are other combinations of (R,G,B) that no color in the rainbow will trigger...

For example, (high,low,high).  Suppose our red and blue cones are highly triggered, but the green cones are not.  No single color in the rainbow would do that...but why limit ourselves to single colors?  After all, a bunch of red wavelengths and blue wavelengths could be coming in, without any green.  It could happen.

And that is the sensation our brain has labeled "purple."

Colloquially we tend to think of "violet" as a purple, so let me be clear: There are purples that cannot be found anywhere in the rainbow.  Magenta, for example, but also other purples that are more similar to violet.  Pink, too, is a color you won't find in the rainbow.  Lots of red, but also a fair amount of green and blue light.

Actually, there are a lot of (R,G,B) combinations that won't be triggered by any single wavelength of light.  "All the colors of the rainbow" is not, in fact, all the colors.
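Continuing the toy cone model (redefined here so the snippet stands alone, with the same assumed peaks), we can check that a red-plus-blue mixture produces a firing pattern no single visible wavelength can match:

```python
import math

PEAKS = (565.0, 540.0, 445.0)  # assumed R, G, B peaks in nm; illustrative only
WIDTH = 40.0

def response(wl):
    return tuple(math.exp(-((wl - p) ** 2) / (2 * WIDTH**2)) for p in PEAKS)

# Light of different wavelengths simply adds up in each cone.
mix = tuple(x + y for x, y in zip(response(620), response(450)))  # red + blue

def purple_pattern(rgb):
    """R and B both firing strongly while G lags behind both."""
    r, g, b = rgb
    return r > 0.3 and b > 0.3 and g < min(r, b)

# The mixture triggers the pattern; no single wavelength from 380-780nm does,
# because any wavelength that excites both R and B sits near the G peak.
is_purple = purple_pattern(mix)
rainbow_has_it = any(purple_pattern(response(wl)) for wl in range(380, 781))
```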

So we perceive a lot more colors than are in the rainbow.  But even so, we do lose a lot by compressing the infinitely many incoming wavelengths down into just 3 numbers.  Necessarily, many different combinations of wavelengths will get mapped to the exact same color perception by our brains.
Observation: We are tangentially now able to answer the age-old question of whether people "perceive colors differently."  The naive answer (which everyone gives) is that your "red" might be different from my "red" but that we would never be able to verify these subjective experiences.  But actually it should be easy to prove that colors are perceived differently.  Just find two bundles of wavelengths that trigger the same (R,G,B) experience in Person A but not in Person B!
For example, let's label:
Bundle A: 10 units of 400nm light + 20 units of 450nm light
Bundle B: 15 units of 370nm light + 24 units of 470nm light 
Each bundle of wavelengths triggers a color perception.  The idea is to carefully choose the bundles so that Person A perceives Bundles A and B as the same color, while Person B does not.   
This is a clever solution because it eliminates the need for subjective comparison across people.   Maybe Person A and B have the "same" subjective perception of Bundle A.  Maybe they have the same perception of B.  But these can't both be true simultaneously, if they also disagree about whether Bundle A and B are different! 
We all map many different light combos to each color, but surely we don't do so in the exact same way.  (Fact: my own left and right eyes don't even agree with each other...). 

OK.  So yeah, we are throwing away a lot of information about the world, compressing all the incoming visible light into 3 numbers.


Adding cones can accomplish two things.  Adding IR or UV cones will expand the range of visible light.  Adding more cones between existing cones doesn't expand the range of visible light, but it does increase the number of light bundles we can distinguish and hence the number of colors we perceive.

With our 3 cones, we can pick out combinations like (high,low,high) that have no analog in the rainbow.  As the number of cones grows, the number of distinguishable combinations which have no analog in the visible spectrum...grows exponentially.
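A back-of-the-envelope count (my own, with an arbitrary quantization assumption): if each cone's firing rate can be distinguished at, say, 10 levels, the number of perceivable joint patterns is 10 raised to the number of cones, while the rainbow only traces out a one-dimensional curve through that space.

```python
LEVELS = 10  # assumed number of distinguishable firing levels per cone

def combinations(num_cones, levels=LEVELS):
    """Count every joint firing pattern across all cones."""
    return levels ** num_cones

human = combinations(3)    # a thousand patterns
mantis = combinations(16)  # ten quadrillion patterns
```

The rainbow, being a single curve through cone-space, gains extra colors only in proportion to the wider visible range, nothing like this explosion in combinations.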

With its sixteen cones, the mantis shrimp presumably sees special colors corresponding to sixteen-number firing combinations, like (high, low, high, low, ...), among many, many other colors.

However.  He does not see this color when he looks at the rainbow.  A rainbow is a collection of single wavelengths, not combinations!  The mantis shrimp may see some IR and UV around the edges of the rainbow, but overall his rainbow-viewing experience is not much different from ours, at least not on the basis of his extra cones.  It's not a THERMONUCLEAR BOMB of light and beauty.

The extra colors he sees in the rainbow are proportional to the expanded width of his visible spectrum.   It seems to me that this is nothing compared to the exponential increase in the number of different colors he can perceive as a function of having over 5 times as many cones as us.

Disclaimer: I am not a mantis shrimp.  I am not even a human with formal training in mantis shrimps or mantis shrimp optics.  The above represents only my current level of understanding of how color vision works.  If you are a mantis shrimp and you do equate rainbows to thermonuclear bombs of light and beauty, by all means, drop me a comment below!  I accept no liability for any keyboard destruction brought on by angry mantis shrimp leaving angry internet comments with their smashing claws.

Thursday, April 18, 2013

Influence, Chapter 3

Chapter 3: Commitment and Consistency

People are driven to appear consistent with their prior actions.  This is such a powerful lever compared to many of the other compliance techniques discussed in the book, because a seemingly tiny upfront action can commit someone to a consistent, but much larger, action later.

Getting people to take a stand -- to make a statement on record -- will compel them to be consistent with their statements later on.  This is especially terrifying because we are so accustomed to telling little lies about our true preferences, in matters that seem of no immediate import.

Suppose you want to recruit volunteers for the American Cancer Society.  The direct approach is to call people up and ask them to volunteer a few hours of their time.  The sneaky, evil, clever, terrifying approach is to first ask them to predict what they would say if asked to volunteer a few hours of their time. A few days later, you call back and ask them to volunteer.  Volunteer rates go up 700%.  The original request appears to be a no-strings survey, so what's the harm in pretending you are a little more willing than you really are?  Only later does the real request come, and by then many people feel compelled to be consistent with their original statements.

Or, consider this terrifying toy trick: In the lead-up to the holiday season, Dastardly Toy Co runs a slew of ads hyping Toy X so all the kids go crazy about it.  They beg their parents for Toy X, explicitly or implicitly extracting holiday promises.  But Dastardly Toy Co intentionally stocks only limited quantities of the toy for the holidays.  Most parents are forced to buy their children some other Toy Y instead.

You have probably observed these manufacturing shortages around the holidays before.  I have seen several economic explanations of them: that production costs are too high to match the insane ramp up in demand, or that shortages cause hype which is good for brand image in the long term.

Cialdini's far more devious explanation is that, immediately after the holidays, Dastardly Toy Co runs another slew of ads for Toy X, at which point the children are absolutely desperate to have it.  They run to their parents.  "It's all I wanted for Christmas!  You promised!"  And what can a parent say to that?  The commitment is already made, the parent has to be consistent.  In this way, Dastardly Toy Co manages to keep sales high in January when, really, everyone is sick of buying toys.

An underlying assumption here is that, by reducing the supply of Toy X, Dastardly Toy Co doesn't lose many customers in December -- rather, they mostly just buy different toys from Dastardly Toy Co.  This makes sense if

  • there are few toy suppliers, possibly colluding to jointly limit the stocks of their most popular items, or
  • Toy X has obvious substitutes that are also made by Dastardly Toy Co. For example, if Toy X is a certain Lego set, then the obvious thing to buy when it's out of stock is another Lego set.  Whereas, what do you buy when the stores run out of those creepy Furbies?

This theory makes some empirical predictions that could be fun to test.

Cialdini also discusses the power of public commitment.  Juries are much more likely to be hung if initial votes are done by show of hands rather than secret ballots.  I have said before that you are much more likely to convince someone of your opinion if you don't force them to explicitly spell out their position at the beginning of the debate.

There's a lot more stuff in this chapter, but I'll stop here.  I suspect the urge for consistency must be especially strong with economists.  Philosophically, economists are pretty open-minded about what a person's preferences may be, but they demand the person behave in a manner consistent with those preferences.  Which implies, as a corollary, that their actions ought to be consistent with each other.  Which implies that inconsistency is evidence of failure to optimize.  I'm not saying preferences aren't allowed to change, but absent a reason for change, actions that contradict each other are evidence of a thinking mistake.  (The horror!)

Tuesday, April 16, 2013

Influence, Chapter 2

Chapter 2: Reciprocation

This chapter is about the powerful social urge to return favors.  The feeling of owing something will drive us to be surprisingly accommodating to future requests.

This is partly a natural and good and healthy impulse.  Reciprocation is social glue.  We are nice to people and they are nice back.  You can think of being nice as a sort of "social transfer" similar to a monetary loan. When we do someone a favor, we can expect them to return it down the road.  Being nice is partly an investment in future social capital, and long before we had any formal monetary system or even a way of storing wealth, our capacity to keep track of the complex social web of relationships must have been a huge advantage for human society.

And it still is.  Don't get me wrong, money has a lot going for it.  For one thing, it enables a truly vast network of people -- orders of magnitude larger than the networks that social ties can sustain -- to cooperate with each other.  But social currency is pretty awesome too.  For one thing, whereas most people don't love their jobs, we tend to genuinely enjoy earning social currency.  We are programmed to enjoy being nice to people we know.

(Of course, the people most in need of your help are probably not people you know.  But if you are like most people, you don't enjoy giving to them nearly as much as you enjoy giving to people you know, partly because doing so will not build up your social capital).

But let's get back to the book.  The impulse to reciprocate is strong for good reason, but it can also be abused by people who want us to comply with their wishes.  There are several sneaky ways to do this.

First, reciprocation does not seem to be inherently one-for-one or zero-sum.  If you give someone a flower, you can get more than a flower's-worth in return.  I imagine this is especially driven by uncertainty over what the flower costs.  (People are willing to pay extra to make sure there is no perception that they are shortchanging you).  The example here is the Krishnas, who famously used to give flowers in the airport before immediately asking for a donation.  Many people would reciprocate with donations even though they didn't even want the flowers.  (We know they didn't want the flowers because they would promptly throw them in the trash.  And then the Krishnas would take them out of the garbage and do it again!)

For the Krishnas, a donation could be in any amount.  But one could also gain on net by offering a small favor and engineering it so the only way to discharge the debt is through a larger favor.  For example, you might give someone a Coke and then later ask them to buy raffle tickets, each of which costs more than a Coke.  Or you might offer free samples of a product, where the only way to "return" the favor is to buy the actual product.

A more devious and interesting approach is the reject-then-retreat method.  In experiments, Cialdini found that by offering an initial deal that was likely to be rejected, they could substantially increase the percentage of people who agreed to another, milder deal.  We could frame concessions within a bargaining process as a sort of favor which the counter-bargainer feels compelled to return.  The experiments were inspired by the example of a Boy Scout with whom the following sequence of events took place (I am paraphrasing here):
Boy Scout: Would you like this $5 circus ticket?
Cialdini: No.
Boy Scout: Well, in that case, would you like to buy some $1 chocolate bars?
Cialdini: OK...I'll take a couple.
The idea is that the Boy Scout first offers an expensive ticket and then makes a concession by offering cheaper chocolate bars.  Cialdini first rejects the offer for the ticket, but then he feels compelled to make a concession as well.  The only way to back off from rejection, though, is to buy a nonzero amount of chocolate. Even though he doesn't like chocolate.

To be honest, I think a similar thing happened to me with pizza once. Even though I am lactose intolerant.

The thing is, understanding that social pressure is being exerted does not make you immune to it.  You don't magically stop caring what other people think about you, just because you wish they weren't using their opinions to apply pressure.  That said, it can be fun to rebel against social influence that you disagree with.

A final takeaway -- although Cialdini does not frame it in this way -- is that many social interactions can be framed as a bargaining process, and concessions toward the middle automatically make the outcome seem more favorable to the subject.  Experiments show that concessions toward the middle make the subject feel more responsible for the outcome, and more satisfied with it too, even when the outcome is exactly the same.

Monday, April 8, 2013

Influence, Chapter 1

I finally got around to reading Robert Cialdini's Influence: The Psychology of Persuasion.  This classic is recommended reading for anyone, but especially social scientists.  In this series, I summarize the book and inject its contents with economics.


Chapter 1 is about the contrast principle.  In a nutshell, people systematically overestimate differences.  
  • A realtor might intentionally show you a crummy house before showing you a nice one.  If all goes to plan, you will have an unbiased opinion of the first house, and you will subsequently overestimate the difference in quality between the houses, meaning you will overestimate the absolute quality of the second house.
The contrast principle means that your estimate of the second house is biased away from the first house.  


The sequential, asymmetric nature of the principle comes out in full force when we consider sequential purchasing decisions:
  • Suppose you want to buy both a sweater and a suit.  The salesman will want to sell you the $400 suit first, after which a $70 sweater will seem cheap.  The sweater doesn't have the reverse effect on the suit because by then you've already bought (or committed to buying) the suit and written it off in your mind.
  • When buying a car, the salesman will steer you to fix on a base price before even considering the mountain of options.
  • In another vein, if you have some bad news to report, you might think it's better to build up to it, to cushion the shock.  But the contrast principle suggests that putting the bad news last will actually exacerbate the shock and exaggerate the severity of the bad news.

To an economist, what seems to be missing here is a discussion of the subject's prior beliefs.  Suppose house quality is measured on a scale of 1 to 10, with the crummy house a 2 and the nice house an 8.  Cialdini's story is:
  • The subject sees a 2, recognizing it as a 2.
  • The subject then sees an 8, instead thinking it is a 9 due to the contrast principle and having first seen a 2.
But what about the subject's prior expectations about house quality?  I'm not saying the contrast principle is "wrong," but it could be that it works by contrasting house qualities with prior expectations of house quality, rather than the prior house viewed.  Consider this alternative story:
  • Beforehand the subject expects to see a house of quality 5.
  • Subject sees a 2, consequently revising prior down to 4.
  • Subject sees an 8, instead thinking it is a 9 due to the contrast principle and having had a prior of 4.    
These stories give the same qualitative result here.  The realtor should show the 2 first.  But that doesn't have to be the case. What if the crummy house were not a 2 but instead a 7, higher than the prior of 5?

Now in the second story, showing a 7 would raise the prior from 5 to 6, reducing the contrast between the prior and the nice house with quality 8.  This contradicts Cialdini's version, which says that if you have a 7 and an 8, showing the 7 first will raise the subject's impression of the 8.
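The two stories can be pinned down in a few lines.  The bias strength and updating weight below are arbitrary assumptions of mine; the point is only the sign of the predictions.

```python
BIAS = 0.25    # assumed strength of the contrast distortion
UPDATE = 0.5   # assumed weight the first house gets when revising the prior

def cialdini_story(second, first=None):
    """Perceived quality when the reference point is the previous house."""
    ref = second if first is None else first
    return second + BIAS * (second - ref)

def prior_story(second, first=None, prior=5.0):
    """Perceived quality when the reference point is the (updated) prior."""
    ref = prior if first is None else prior + UPDATE * (first - prior)
    return second + BIAS * (second - ref)

# Both stories agree that showing the crummy 2 first inflates the nice 8,
# but they disagree about whether showing a decent 7 first helps or hurts.
```

Under Cialdini's story, showing a 7 before the 8 still raises the impression of the 8; under the prior story it lowers it, because the 7 drags the reference point upward.  That divergence is the testable prediction.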

That is an easy and important experiment to run, and an obvious one to economists, I think.  Philosophically, this is a good example of economic versus psychology paradigms (though not necessarily a criticism of either).  It is very difficult to generate bias on average within an economic model.  Cialdini's version of the contrast principle creates bias on average, while my version potentially does not.

Perhaps it is best to build up to bad news after all, if you are looking to cushion the blow.  Really bad news is usually unexpected, so you might want to move the recipient's prior in that direction more gently first.