Thursday, February 3, 2011

Optimal star collection

Amazon reviews don't just reflect product characteristics. They capture whether the product is worth the price. Is it a good buy? That's what the stars measure.

Which is interesting, because prices change all the time, but stars stick around and have a positive influence on all future sales. So star collection should influence your pricing strategy, right?

What does optimal star collection look like? If you were going to collect 1000 reviews over the lifetime of your product, what order would you want the ratings to arrive in? Well, assuming a higher average rating means more sales, you want the best ones first! That way, at every point in time your average rating is as high as it could possibly be. And since the stars reflect value for the price, a lower price tends to earn higher ratings. So, taken alone, optimal star collection dictates that you start with a low price and gradually increase it over the lifetime of your product.
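A quick sketch of the "best ones first" claim, using a hypothetical distribution of 1000 ratings (the numbers are made up for illustration): since the first k ratings in best-first order have the largest possible sum of any k ratings from the same pool, the running average after best-first delivery is at least as high as under any other arrival order, at every point in time.

```python
import random

# Hypothetical pool of 1000 lifetime ratings (distribution is made up).
ratings = [5] * 500 + [4] * 300 + [3] * 120 + [2] * 50 + [1] * 30

def running_averages(seq):
    """Average star rating shown after each review arrives."""
    avgs, total = [], 0
    for i, r in enumerate(seq, start=1):
        total += r
        avgs.append(total / i)
    return avgs

best_first = running_averages(sorted(ratings, reverse=True))

random.seed(0)
random_order = running_averages(random.sample(ratings, len(ratings)))

# Best-first weakly dominates a random arrival order at every prefix.
assert all(b >= r for b, r in zip(best_first, random_order))
```

The same dominance holds against *any* permutation of the pool, not just a random one, because a sorted prefix maximizes the prefix sum.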

Of course, there's another good reason for sellers to vary their prices over time, namely price discrimination. And there are two basic things going on there. First, you may want to fluctuate your prices randomly to separate out the people who are willing to hunt for deals. Second, if you're courting a bunch of impatient early-adopting iEverything fanatics, you want to start your iPhone at $600 and decrease the price over time. But note that this latter effect is in direct opposition to optimal star collection. Tension.

Anyway, Amazon is complicated and I'm still only at the very beginning of thinking about it, but these are at least some of the basic forces to keep in mind. Of course, there are also written reviews accompanying the ratings to give them context. But while many of these reviews bring up the issue of price ("this was a great buy!"), inspection reveals that few of them actually bother to say what the price was at the time of the review. So if anything, reviews written when the price is low would serve only to bolster future buyers' impression of the current price ("$35 must be a good price for a Rubik's cube, since everyone says this is a great buy"). On the face of it, that supports optimal star collection, although admittedly there is a lot more going on there. Perhaps for another day.


  1. I am going to take this a different direction and rant about the inadequacy of star ratings.

    What I want to know is why sites offer only this one vague overall star rating. You could make the argument (and not convince me) that it would require too much space or too much cognitive load or whatever to have ratings for various specific qualities on a site like Amazon, but what about movie rating sites? Why just this one overall rating? And why not let you slice it by other covariates, including, perhaps, the people you know?

    In conclusion, I think current overall star ratings are woefully inadequate, and I have been scheming to do something about it. Would love to hear your thoughts on why star ratings are the way they currently are, and how (if at all) they could be improved.

  2. A product can have many characteristics. It can be really convenient to squish them down into 1 dimension according to some rule, but you're right that in doing so, you lose information that might have been useful.

    Personally, I think the Amazon rating system works pretty well. For me, the star rating is more like the GPA on a college application: it's a nice quick summary statistic, and maybe I don't consider items below a certain cutoff (or maybe it determines the order in which I check out products...that's the nice thing about a 1-dimensional measure: it's totally ordered), but when I start to seriously consider an item, I look deeper and base the decision more on the content of the actual written reviews. Could a multidimensional star rating be useful? Maybe if I were trying to find the best item in a large space of contenders, with several important quality dimensions I'm willing to trade off between. In practice that almost never describes my search behavior on Amazon, so is it really worth it? What variables would we split the ratings into, keeping in mind that Amazon carries a truly wide variety of products?

    Movie reviews also come with a star rating and a written review. But unlike Amazon, I emphatically *don't* want to read a movie review if I'm actually going to see the movie. And unlike Amazon, movies share a small handful of obvious characteristics we might really want to see separate ratings on.

    Of course, to implement this you would have to get reviewers to actually give up this information...and maybe it's not so easy to shift them out of the equilibrium we're in right now, where everyone's giving and expecting 1-dimensional star ratings. In any case, it's not clear to me that most people actually want to wade through more information...but it should be possible to make it unobtrusive yet available to those who want it, or alternatively we can just do what Netflix does and optimize *for* individuals, behind the scenes.

    I am curious what you plan to do about it. As far as I know, all these ratings are currently done in-house. Is there a way to provide flexible ratings technology to these different sites? Possibly there are many ratings features that sites would be happy to have, if only someone else would make it a little easier for them.

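    A tiny sketch of the information lost by squishing dimensions down to one number (the products, dimensions, and simple-average squishing rule here are all hypothetical):

    ```python
    # Two hypothetical products with very different quality profiles.
    product_a = {"durability": 5, "ease_of_use": 1, "value": 3}
    product_b = {"durability": 3, "ease_of_use": 3, "value": 3}

    def overall(product):
        """Squish the per-dimension ratings into one number (simple average)."""
        return sum(product.values()) / len(product)

    # Both collapse to the same 3-star summary, yet a buyer who cares
    # mostly about ease of use would rank them very differently.
    assert overall(product_a) == overall(product_b) == 3.0
    ```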

  3. I think it’s true that for a lot of the products we buy, we only really care that people don’t think they totally suck. But for any decision that people stress over – which could be expensive decisions like cars or laptops, or tricky decisions like restaurants or tennis racquets, etc. – I think losing that information really sucks for me and probably for a lot of other people. Moreover, I want the sites to use the power of the data they have to discriminate—to say what *I* would like based on whatever the algorithms can infer.

    I think it is a reasonable concern that maybe people just would not want to be bothered to input that level of info, and so maybe it’s not worth asking for it. On the other hand, it seems less burdensome than a lot of the written reviews I see on Amazon and elsewhere. Yeah, I understand a lot of people write reviews as a hobby, whereas quantitative numbers are less personal and meaningful, but I think with the right incentive structure enough people would probably do it. (What the right structure would be, I don’t know.)

    I think there could be potential for a site with detailed ratings of people – kind of like Hot or Not, but for more than just hotness. If you had that data across many people and knew how they are connected to one another, that would be unbelievably juicy. :-)

  4. It scares me a bit to imagine a world where companies know a lot about *me* and therefore know how better to serve me (though it depends on the relevant information; I have little problem with Netflix having intimate knowledge of the types of movies I like). But I like the idea of a world where companies know a lot about certain *types* of people (not necessarily actual individuals) and I have an easy way of locating and accessing the recommendations that correspond to the types of people I'm actually similar to. That seems like a winner to me.

  5. Have you heard of the site Hunch? It asks you to fill out (at least) 20 questions, and then it suggests things that it thinks you will like. (Founded by the same person who did Flickr.) For me, it was creepily accurate.

  6. That sounds intriguing. Have you used it to narrow in on things of interest to you, or just as an amusing exercise?