Experts are NOT!

“Expert Political Judgment: How Good Is It? How Can We Know?” is a new book by Philip Tetlock that collects a lot of hard data to show that most “experts” are no better than you or me. Specifically, he is talking about people who make prediction their business: people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables. And he shows that their predictions are no better than those of average people who read newspapers (i.e., people who are a little knowledgeable).

See this New Yorker review for details. Excerpt:

“Expert Political Judgment” is not a work of media criticism. Tetlock is a psychologist—he teaches at Berkeley—and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate.

Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
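The scoring idea in that paragraph (did things you gave an x per cent chance happen x per cent of the time, versus a flat one-third baseline?) can be sketched numerically. The following toy Python illustration is my own construction, not Tetlock's actual methodology or data: it uses a multiclass Brier score, under which the dart-throwing monkey's flat 1/3 forecast scores exactly 2/3 per question, while a hypothetical overconfident pundit who bets 90% on a pet outcome does much worse.

```python
import random

random.seed(0)

def brier(probs, outcome):
    """Multiclass Brier score for one forecast: lower is better.
    probs: probabilities for the three futures; outcome: index of what happened."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# Hypothetical data: 1000 three-outcome questions whose true odds are 50/30/20.
true_dist = [0.5, 0.3, 0.2]
outcomes = random.choices([0, 1, 2], weights=true_dist, k=1000)

# An "overconfident expert" who puts 90% on a favourite future each time,
# where the favourite is a pet theory unrelated to reality.
expert_scores = []
for o in outcomes:
    fav = random.randrange(3)
    probs = [0.9 if i == fav else 0.05 for i in range(3)]
    expert_scores.append(brier(probs, o))

# ...versus the dart-throwing monkey: a flat 1/3 on everything.
# Its Brier score is (1/3 - 1)**2 + 2 * (1/3)**2 = 2/3, no matter what happens.
monkey_scores = [brier([1/3, 1/3, 1/3], o) for o in outcomes]

print("expert:", sum(expert_scores) / len(expert_scores))  # well above 2/3
print("monkey:", sum(monkey_scores) / len(monkey_scores))  # exactly 2/3
```

The point of the sketch is that hedged, evenly spread probabilities are hard to beat unless your confident forecasts actually track reality.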

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable.

I should go and try to find this book…

Scott Adams’ Golden Happiness Ratio

Scott Adams has an interesting theory on how to be happy – something I totally agree with:

I have a theory that you can predict how happy people are and perhaps how successful by their ability to tolerate imperfection. The Golden Happiness Ratio is about 4/5ths right, also known as “good enough.”

Once you achieve about 80% rightness, any extra effort is rarely worth it. People who can’t stop until they get to 100% are usually stressed to the point where they can barely function. And don’t expect them to do much multitasking.

See full article.

I like what you like

The New York Times has a very interesting article about herd instinct. The main point it makes is that people tend to like things that they think other people like (or will like). In other words, Himesh Reshammiya is popular because he is popular. Of course, people do have intrinsic likes and dislikes that are independent of what other people think – but an equally, if not more, important role is played by the “social” aspect.

And of course, there is research to prove this point.

The researchers created 9 different websites offering music by unknown artists. Users of these websites could download and listen to the music. On 8 of those websites, users could see how often each song had been downloaded by others in the past (counted for that website only). On the last one, users had no idea of a song’s popularity. A bunch of interesting results emerged:

First, if people know what they like regardless of what they think other people like, the most successful songs should draw about the same amount of the total market share in both the independent and social-influence conditions — that is, hits shouldn’t be any bigger just because the people downloading them know what other people downloaded. And second, the very same songs — the “best” ones — should become hits in all social-influence worlds.

What we found, however, was exactly the opposite. In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds, just as cumulative-advantage theory would predict. Introducing social influence into human decision making, in other words, didn’t just make the hits bigger; it also made them more unpredictable.
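The cumulative-advantage dynamic described above can be sketched as a toy simulation. This is an illustrative model of my own, not the researchers’ actual setup; the listener counts, quality scores, and weighting rule are all assumptions. Each listener picks a song with probability proportional to its intrinsic quality plus a bonus for prior downloads, so early random luck snowballs:

```python
import random

def run_world(qualities, listeners=2000, social_weight=5.0, seed=None):
    """Simulate one 'world': each listener picks a song with probability
    proportional to (intrinsic quality + social_weight * prior downloads)."""
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(listeners):
        weights = [q + social_weight * d for q, d in zip(qualities, downloads)]
        choice = rng.choices(range(len(qualities)), weights=weights)[0]
        downloads[choice] += 1
    return downloads

qualities = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]   # song 0 is intrinsically best

# Independent world: no social signal at all, picks track quality alone.
independent = run_world(qualities, social_weight=0.0, seed=42)

# Several social-influence worlds: same songs, different random histories.
social_worlds = [run_world(qualities, seed=s) for s in range(8)]

print("independent winner:", independent.index(max(independent)))
for w in social_worlds:
    print("social winner:", w.index(max(w)), "top share:", max(w) / sum(w))
```

Run it and the social worlds typically show a much bigger top share than the independent world, with the winning song varying from world to world: the hits get bigger and less predictable, just as the quoted passage reports.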

On average, they found that a song in the top 5 by intrinsic quality (as measured on the 9th, influence-free website) had only a 50% chance of making it into the top-5 list by popularity.

So that should explain why shakalakalakalakalakalakalakalaka shakalaka boom boom is assaulting my ears everywhere. And why Aap ka Suroor is even happening.