You are terrible with numbers, unless there is a very good reason why you are different from the subjects of the psychological experiments described below. See this post at the always interesting Overcoming Bias blog:
Then how about this? Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal. Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who’s more likely to survive than not.
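As a quick arithmetic check (my own sketch, not part of Yamagishi's write-up), the frequency framing actually describes the *less* lethal disease:

```python
# Hypothetical check of the Yamagishi (1997) framing: convert the
# frequency description into a rate and compare it to 24.14%.
frequency_rate = 1286 / 10000   # described as "kills 1,286 out of every 10,000"
percentage_rate = 0.2414        # described as "24.14% likely to be fatal"

print(f"1,286 out of 10,000 = {frequency_rate:.2%}")   # 12.86%
print(f"Stated percentage   = {percentage_rate:.2%}")  # 24.14%
# The disease judged more dangerous kills at roughly half the rate.
```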
But wait, it gets worse.
Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety. Slovic et al. (2002) presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support). One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good – is that a lot? a little? – while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
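To make the arithmetic explicit (a back-of-the-envelope check of my own, not part of Slovic et al.'s report), the higher-rated framing describes a strictly smaller benefit:

```python
# The two framings of the airport-safety measure from Slovic et al. (2002).
lives_plain   = 150          # "saving 150 lives"
lives_percent = 0.98 * 150   # "saving 98% of 150 lives"

print(f"Plain framing:      {lives_plain} lives")
print(f"Percentage framing: {lives_percent} lives")   # 147.0
# The option with the higher mean support (13.6 vs 10.4) saves fewer lives.
```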
Or consider the report of Denes-Raj and Epstein (1994): subjects who were offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl often preferred to draw from a bowl with more red beans but a smaller proportion of red beans. E.g., 7 red beans out of 100 was preferred to 1 out of 10.
According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans. This may sound crazy to you, oh Statistically Sophisticated Reader, but if you think more carefully you’ll realize that it makes perfect sense. A 7% probability versus 10% probability may be bad news, but it’s more than made up for by the increased number of red beans. It’s a worse probability, yes, but you’re still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
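If it helps to see the numbers side by side, here is a small sketch of my own, using the 7-in-100 and 1-in-10 bowls mentioned above:

```python
# Compare the two bowls from Denes-Raj and Epstein (1994).
# Each red bean drawn pays $1, so expected winnings per draw = P(red) * $1.
p_big_bowl   = 7 / 100   # more red beans, lower proportion
p_small_bowl = 1 / 10    # fewer red beans, higher proportion

print(f"Big bowl:   P(win) = {p_big_bowl:.0%},  expected winnings = ${p_big_bowl:.2f}")
print(f"Small bowl: P(win) = {p_small_bowl:.0%}, expected winnings = ${p_small_bowl:.2f}")
# The big bowl is worse on every draw, no matter how many red beans it holds.
```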
See full article for more examples and references.
In a follow-up article the same author has another great example. When subjects were asked to choose between a 7/36 chance of winning $9 and a 100% chance of winning $2, only 33% chose to go for the $9. That seems reasonable. But when a different set of subjects was asked to choose between these two options:
Choice 1: a 7/36 chance of winning $9 and a 29/36 chance of losing 5¢
Choice 2: 100% chance of winning $2
Strangely, 60.8% of the subjects chose Choice 1! Note that this gamble is strictly worse than the corresponding gamble in the previous experiment. Apparently,
After all, $9 isn’t a very attractive amount of money, but $9/5¢ is an amazingly attractive win/loss ratio.
You can make a gamble more attractive by adding a strict loss to it! Isn’t psychology fun?
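For the record, here is the expected-value arithmetic (my own sketch, using the payoffs quoted above):

```python
# Expected values of the gambles described above.
ev_bet_alone     = (7 / 36) * 9                      # 7/36 chance of $9, else nothing
ev_bet_with_loss = (7 / 36) * 9 - (29 / 36) * 0.05   # same bet plus 29/36 chance of losing 5 cents
ev_sure_thing    = 2.00                              # 100% chance of $2

print(f"7/36 chance of $9:             ${ev_bet_alone:.2f}")      # ~$1.75
print(f"...plus 29/36 chance of -5c:   ${ev_bet_with_loss:.2f}")  # ~$1.71
print(f"Sure $2:                       ${ev_sure_thing:.2f}")
# Adding the 5-cent loss can only lower the expected value, yet it nearly
# doubled the fraction of subjects who preferred the gamble (33% -> 60.8%).
```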
Again, the full article contains even more goodies.
Case of Anchoring?
http://en.wikipedia.org/wiki/Anchoring
Shashikant, some of those are cases of Anchoring. Others are explained by the Affect Heuristic.