A few days ago Dan wrote about Don Moore’s research on how we accept advice from others. A lab experiment showed that subjects followed advice from confident, though not necessarily accurate, sources. Another study, led by Prof. Gregory Berns of Emory University, reveals a different aspect of our reaction to advice.
Berns recorded his subjects’ brain activity with an fMRI machine while they made simulated financial decisions. In each round, subjects had to choose between receiving a risk-free payment and taking their chances on a lottery. In some rounds they were presented with advice from an “expert economist” as to which alternative was the better choice.
The results are surprising. Expert advice attenuated activity in areas of the brain that correlate with valuation and probability weighting. Simply put, the advice made the brain switch off (at least to a great extent) the processes required for financial decision-making. This response, borne out by subjects’ actual decisions in the task, is troublesome, perhaps even frightening. The expert advice given in the experiment was suboptimal, meaning the subjects could have done better had they weighed their options themselves. But how could they? Their brains were somewhat dormant.
“For the great majority of mankind are satisfied with appearances, as though they were realities, and are more often influenced by the things that ‘seem’ than by those that ‘are.’”
-16th-century Italian politician Niccolò Machiavelli
It’s something we come across regularly: presentation trumps content. Often what matters is not what we know, or what we have done, but rather how we spin it. It’s why cover letters are so important, and why the peripheral route to persuasion – one of advertising’s biggest weapons – works.
Now, Don Moore of Carnegie Mellon University has demonstrated yet another way that we are heavily influenced by delivery: we tend to seek advice from experts who exhibit the most confidence – even when we know they haven’t been particularly accurate in the past.
In his experiment, Moore had volunteers guess the weight of people in photographs and paid them for correct answers. But before each guess, the volunteers had to choose one of four advice-givers (also volunteers) from whom to buy advice. Each advice-giver submitted their weight guess in percentage form, with some advisers spreading their advice over multiple weight ranges. So one adviser might have said there was a 70% chance that the person’s weight was 170–179 pounds, a 15% chance that it was 160–169, and a 15% chance that it was 180–189. A more confident adviser, however, would have put all his eggs in one basket and said there was a 100% chance that the weight was in the 170–179 range.
Now here’s the really important part: in each round, before they chose their adviser, volunteers got to see each adviser’s percentage spread, but not the associated weight ranges. (See this really handy chart for more on the set-up.)
What did Moore find? Volunteers were more likely to buy advice from confident advisers (such as the 100% adviser above) than from those who spread out their percentages. What’s more, this tendency led advisers to make their advice more and more precise in subsequent rounds – but not more accurate.
These findings are troubling, because though confidence and accuracy sometimes go hand in hand, they don’t necessarily do so. And when we favor confident advisers, some will exaggerate to give us what we want. Maybe this is why so many pundits on TV, for example, overstate their certainty.