Thursday, May 19, 2011

Blind spot alert!

After my Blind spots post from a few days ago, I heard some interesting thoughts from a few people. I was calling for a list of blind spots of each social science, so that people could quickly gauge how trustworthy the results from other fields were. This suggestion, from Justin, is better than a passive list:
I can imagine a browser extension that literally has a checklist of common errors/blind spots, and members who read a piece of journalism or read an academic article can check off what they think the problems are, then other people browsing the web can see what the possible errors are with the article...
I think this (or something like it) would be great. For these reasons:
  1. For present purposes, it could be a lot better than reading the comments section of an article (should one exist). The whole point is to be able to tell, at a glance, the trustworthiness of the results. By collecting data from people in an organized, standardized way, we can construct simple measures that aggregate that information into something useful.
  2. It seems this would not require any participation on the part of the news site or blog, which is pretty important. First, that means it's universal, rather than contingent on any given site adopting it; people can use it everywhere, and so they'll be more willing to use it anywhere. Second, as semi-loyal economist Kris reminds me, a huge amount of distortion (be it willful or inept) happens between the scientist's paper and the reporter's article about it. I'm not sure reporters would be too glad to have people calling them out all the time.
  3. Enough people will want to participate. (yes? no?) Any high-volume news site draws thousands of readers, many of whom have strong opinions on a given article. Especially if there's no comments section, some of them may feel compelled to register their concerns using our mechanism.
  4. It's not a simple problem, but I think we could rig it up so that we actually extract useful information from people. Of course people are going to have strong personal biases, but even just a simple measure like having an option to check off your profession would go a long way to helping us make sense of the data. If you were interested in what the economists thought of a certain article, you could configure it to show their opinions. And I don't think other people would be compelled to pretend they're economists.
I would also want to know whether the opinion came from someone who was actually familiar with the work firsthand; again, easily solicitable, and not necessarily something people'd be tempted to lie about.
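To make the aggregation idea in point 4 concrete, here is a minimal sketch of how the collected checkmarks might be tallied and filtered. Everything here is hypothetical: the field names (`profession`, `firsthand`, `flags`) and the example checklist items are just illustrations of the kind of structured data the mechanism could collect, not a design.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Response:
    """One reader's checklist submission for a given article (illustrative)."""
    profession: str                           # self-reported, e.g. "economist"
    firsthand: bool                           # claims firsthand familiarity with the work
    flags: set = field(default_factory=set)   # blind-spot boxes the reader ticked

def summarize(responses, profession=None, firsthand_only=False):
    """Count how often each blind-spot flag was ticked, optionally restricted
    to one self-reported profession and/or to firsthand readers.
    Returns (number of responses counted, Counter of flag -> tick count)."""
    tally = Counter()
    n = 0
    for r in responses:
        if profession is not None and r.profession != profession:
            continue
        if firsthand_only and not r.firsthand:
            continue
        n += 1
        tally.update(r.flags)
    return n, tally

# Example: show only what self-identified economists flagged.
responses = [
    Response("economist", True, {"omitted-variable bias"}),
    Response("economist", False, {"omitted-variable bias", "causation from correlation"}),
    Response("journalist", False, {"sensational headline"}),
]
n, tally = summarize(responses, profession="economist")
```

Even something this crude would let a reader ask "what fraction of economists who read the paper itself flagged this article, and for what?" — which is a lot more informative than a raw comment thread.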

The structure of the information collection mechanism would be crucial. Right now I'm vaguely imagining something between a checklist and a comments section. Admittedly we're a long way from designing a perfect mechanism, but already I think the idea has promise. What do you think? Would you use this? What would you want from such a mechanism?

(Sadly, we can't just invoke the Revelation Principle and call it a day...)
