Economics and the maximization of profit (and lies).
When a friend sent me this paper the other day, I admit that I took a long hard look at myself and my economist friends. According to this study, economists, it seems, are worse than most when it comes to truth telling. This discovery was made by researchers Raúl López-Pérez and Eli Spiegelman, who wanted to examine whether certain characteristics (for instance religiosity or gender) made people averse to lying. They measured the preference for honesty by canceling out other motivations, such as altruism or fear of getting caught.
The way they accomplished this was with a very simple experiment where a pair of participants acted as sender and receiver of information. The sender would sit alone in front of a screen that showed either a blue or green circle. He or she would then communicate the circle’s color to the receiver, who could not see the color or the sender. Senders received 15 euros every time they indicated a green circle, and only 14 when they communicated that the circle was blue. Receivers earned an even 10 euros regardless of the color, and so were unaffected by either the truthfulness or dishonesty of the senders.
So senders had four strategies:
1) Tell the truth when shown a green circle and get the maximum payment;
2) Lie when shown a green circle, choosing a lower payment;
3) Tell the truth when shown a blue circle and receive the lower payment;
4) Lie when shown a blue circle and gain an extra euro.
All was well and good if senders saw a green circle: telling the truth earned them the maximum amount of cash (as you can imagine, option 2 was fairly unpopular). What if they saw blue, though? Well, they had two options: tell the truth and lose a euro, or lie and get paid more. The experimenters reasoned that a lie-averse sender would always communicate the circle’s color accurately, while senders motivated by maximizing profit would indicate green regardless.
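The incentive structure above can be sketched in a few lines of code. This is a minimal illustration of the payoffs described in the text, not the researchers' actual materials; the function names and the dictionary layout are my own, and only the 15/14-euro amounts come from the study.

```python
# Sketch of the sender's incentives in the circle-color experiment.
# The 15/14-euro payoffs are from the text; everything else is illustrative.

PAYOFF = {"green": 15, "blue": 14}  # sender is paid for the color REPORTED

def sender_payoff(true_color, reported_color):
    """The true color is irrelevant to the sender's pay; only the report counts."""
    return PAYOFF[reported_color]

def honest_sender(true_color):
    return true_color  # always report the truth, whatever it costs

def profit_maximizing_sender(true_color):
    return "green"  # always report the higher-paying color

# The two strategies diverge only when the circle is blue:
print(sender_payoff("blue", honest_sender("blue")))             # 14
print(sender_payoff("blue", profit_maximizing_sender("blue")))  # 15
```

Since the receiver's 10 euros never change, the one-euro gap on a blue circle is a clean price tag on honesty, which is exactly what makes the design work.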
Participants, who were from a wide array of socio-economic and religious backgrounds, also came from a range of majors. Researchers grouped majors together into business and economics, humanities, and other (science, engineering, psych). The results showed little difference in honesty as a factor of socio-demographic characteristics or gender. A student’s major, however, was a different story. As it turned out, those in the humanities, who were the most honest of all, told the perfect truth a little over half the time. The broad group of “other” was a bit less honest with around 40% straight shooters. And how about the business and economics group? They scraped the bottom with a 23% rate of honesty.
Keep in mind that this was one study of one group of people; however, it does indicate that the study of economics makes people less likely to tell the truth for its own sake. And this holds water, economically speaking: 1 euro has clear and measurable value; it can be exchanged for a number of things. The benefit of telling the truth in this situation does not carry any financial value (which is not to say lying in finance is not costly—clearly it is). But rationalization, which we all take part in, may be easier for those who think in terms of opportunity cost and percent profit.
This is not terribly surprising to me in the context of the greater history of economics, which has been characterized by the study of selfishness. The concept of the invisible hand (inherent in the notion of self-correcting markets) holds that people should act selfishly (maximizing their own profits) and that the market will combine all of their actions with an efficient outcome. While it’s true that markets can sometimes accommodate a range of behaviors without failing, if we continue to teach students the benefits and logicality of rational self-interest, what can we really expect?
The Long-Term Effects of Short-Term Emotions
The heat of the moment is a powerful, dangerous thing. We all know this. If we’re happy, we may be overly generous. Maybe we leave a big tip, or buy a boat. If we’re irritated, we may snap. Maybe we rifle off that nasty e-mail to the boss, or punch someone. And for that fleeting second, we feel great. But the regret—and the consequences of that decision—may last years, a whole career, or even a lifetime.
At least the regret will serve us well, right? Lesson learned—maybe.
Maybe not. My friend Eduardo Andrade and I wondered if emotions could influence how people make decisions even after the heat or anxiety or exhilaration wears off. We suspected they could. As research going back to Festinger’s cognitive dissonance theory suggests, the problem with emotional decisions is that our actions loom larger than the conditions under which the decisions were made. When we confront a situation, our mind looks for a precedent among past actions without regard to whether a decision was made in emotional or unemotional circumstances. Which means we end up repeating our mistakes, even after we’ve cooled off.
I said that Eduardo and I wondered if past emotions influence future actions, but, really, we worried about it. If we were right, and recklessly poor emotional decisions guide later “rational” moments, well, then, we’re not terribly sophisticated decision makers, are we?
To test the idea, we needed to observe some emotional decisions. So we annoyed some people, by showing them a five-minute clip from the movie Life as a House, in which an arrogant boss fires an architect who proceeds to smash the firm’s models. We made other subjects happy, by showing them—what else?—a clip from the TV show Friends. (Eduardo’s previous research had established the emotional effects of these clips).
Right after that, we had them play a classic economics game known as the ultimatum game, in which a “sender” (in this case, Eduardo and I) has $20 and offers a “receiver” (the movie watcher) a portion of the money. Some offers are fair (an even split) and some are unfair (you get $5, we get $15). The receiver can either accept or reject the offer. If he rejects it, both sides get nothing.
Traditional economics predicts that people—as rational beings—will accept any offer of money rather than reject an offer and get zero. But behavioral economics shows that people often prefer to lose money in order to punish a person making an unfair offer.
Our findings (published in Organizational Behavior and Human Decision Processes) followed suit, and, interestingly, the effect was amplified among our irritated subjects. Life as a House watchers rejected far more offers than Friends watchers, even though the content of the movie had nothing to do with the offer. Just as a fight at home may sour your mood, increasing the chances that you’ll send a snippy e-mail, being subjected to an annoying movie leads people to reject unfair offers more frequently even though the offer wasn’t the cause of their mood.
Next came the important part. We waited. And when the emotions evoked by the movie were no longer a factor, we had the participants play the game again. Our fears were confirmed. Those who had been annoyed the first time they played the game rejected far more offers this time as well. They were tapping the memory of the decisions they had made earlier, when they were responding under the influence of feeling annoyed. In other words, the tendency to reject offers remained heightened among our Life as a House group—compared with control groups—even when they were no longer irritated.
So now I’m thinking of the manager whose personal portfolio loses 10% of its value in a week (entirely plausible these days). He’s frustrated, angry, nervous—and all the while, he’s making decisions about the day-to-day operations of his group. If he’s forced to attend to those issues right after he looks at his portfolio, he’s liable to make poor decisions, colored by his inner turmoil. Worse, though, those poor decisions become part of the blueprint for his future decisions—part of what his brain considers “the way to act.”
That makes those strategies for making decisions in the heat of the moment even more important: Take a deep breath. Count backward from 10 (or 10,000). Wait until you’ve cooled off. Sleep on it.
If you don’t, you may regret it. Many times over.
Surprises from our recent economic history
Reflecting back on our recent economic history brings to my mind two sad surprises.
Even as a behavioral economist who generally believes in the prevalence of irrationality in our everyday life, I place some stock in the main mechanism that should have maintained the efficiency of the financial markets: competition. In principle, competition among individuals, banks, and financial institutions should push the actors in the market to do the right thing for their clients as they fight to outdo their rivals. After the Wall Street fiasco, I expected and hoped that, in the spirit of competition, some financial institutions would change their ways given the new information about the risks they were taking, and impose restrictions on themselves. I did not expect that they would do so because they were benevolent, but because they wanted to win the business of those who had lost trust in the financial institutions.
Surprise one: Sadly, the forces of competition do not seem to have any effect on the functioning of our financial institutions, and Wall Street seems to be back to its pre-fiasco structure.
We are now discussing the possibility of health care reform, which arguably is even more messed up than our financial institutions (about 18 percent of GDP, bad incentives, bad intuitions, and the leading cause of bankruptcy before the current housing problem). When I look at the health care debate, it seems to be fueled by ideological beliefs about the importance of competition and freedom of choice on one hand, and the evilness of regulations and limits on the other. As someone who loves data beyond theories, it is surprising to me how little we know about the effectiveness of different versions of health care, and how sure people are of their own beliefs — which makes it an ideological and not a very useful debate (this is just a small surprise).
But what is most surprising to me is that the tremendously expensive lessons we have learned about the efficiency of markets and self-interest do not seem to carry over to the health care debate. As a society, we still seem to be enamored with the ideology of free markets, and have not updated our beliefs in their efficiency despite the evidence. On the bright side, it looks like behavioral economists will have a lot of work for the foreseeable future.
Asimov on evidence
One of the things that always amazed me about rational economists is that they don’t update their opinions.
In economics there is a very clear way in which people are supposed to observe new information and, based on it, update their understanding of the world (a process called Bayesian updating). And while Bayesian updating is a big part of economic theory, economists themselves don’t seem to do any of this updating based on data about the real economic behavior of people.
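For readers who haven't met it, the updating rule itself is just Bayes' rule. The sketch below shows the mechanics; the probabilities are numbers I made up for illustration, not anything measured.

```python
# A textbook Bayes'-rule update, to make "Bayesian updating" concrete.
# All probabilities here are invented for illustration.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing the evidence:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]"""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Suppose an economist is 90% confident that markets are fully rational,
# then observes experimental data that is twice as likely if people are
# irrational (0.4) as if they are rational (0.2):
posterior = bayes_update(prior=0.9, p_evidence_given_h=0.2,
                         p_evidence_given_not_h=0.4)
print(round(posterior, 3))  # 0.818
```

Even with a strong prior, the belief is supposed to move at least a little with each such observation; a posterior that never budges from 0.9 is what the paragraph above is complaining about.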
Of course, no experiment is ever perfect, and there are always more questions and alternative possible interpretations – but that rational economists would not update at all? This is just too odd. Or maybe it is the real proof that we are all somewhat irrational?
Here is what Asimov had to say about believing in data…
“Don’t you believe in flying saucers, they ask me? Don’t you believe in telepathy? – in ancient astronauts? – in the Bermuda triangle? – in life after death?
No, I reply. No, no, no, no, and again no.
One person recently, goaded into desperation by the litany of unrelieved negation, burst out ‘Don’t you believe in anything?’
‘Yes,’ I said. ‘I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.”
Isaac Asimov, The Roving Mind (1997), 43
2008 was a good year for behavioral economics
Before the financial crisis of 2008, it was rather difficult to convince people that we all might have irrational tendencies.
For example, after I gave a presentation at a conference, a fellow I’ll call Mr. Logic (a composite of many people I have debated with over the years) buttonholed me. “I enjoy hearing about all the different kinds of small-scale irrationalities that you demonstrate in your experiments,” he told me, handing me his card. “They’re quite interesting, great stories for cocktail parties.” He paused. “But you don’t understand how things work in the real world. Clearly, when it comes to making important decisions, all of these irrationalities disappear, because when it truly matters, people think carefully about their options before they act. And certainly when it comes to the stock market, where the decisions are critically important, all these irrationalities go away and rationality prevails.”
Given these kinds of responses, I was often left scratching my head, wondering why so many smart people are convinced that irrationality disappears when it comes to important decisions about money. Why do they assume that institutions, competition, and market mechanisms can inoculate us against mistakes? If competition were sufficient to overcome irrationality, wouldn’t that eliminate brawls in sporting competitions, or the irrational self-destructive behaviors of professional athletes? What is it about circumstances involving money and competition that might make people more rational? Do the defenders of rationality believe that we have different brain mechanisms for making small versus large decisions, and yet another for dealing with the stock market? Or do they simply have a bone-deep belief that the invisible hand and the wisdom of the markets guarantee optimal behavior under all conditions?
As a social scientist, I’m not sure which model describing human behavior in markets (rational economics, behavioral economics, or something else) is best, and I wish we could set up a series of experiments to figure this out. Unfortunately, since it is basically impossible to do any real experiments with the stock market, I’ve been left befuddled by the deep conviction in the rationality of the market. And I’ve wondered if we really want to build our financial institutions, our legal system, and our policies on such a foundation.
As I was asking myself these questions, something very big happened. Soon after Predictably Irrational was published, in early 2008, the financial world blew to smithereens, like something in a science fiction movie. Alan Greenspan, the formerly much-worshipped chairman of the Federal Reserve, told Congress in October 2008 that he was “shocked” (shocked!) that the markets did not work as anticipated, or automatically self-correct as they were supposed to. He said he made a mistake in assuming that the self-interest of organizations, specifically banks and others, was such that they were capable of protecting their own shareholders. For my part, I was shocked that Greenspan, one of the tireless advocates of deregulation and a true believer in letting market forces have their way, would publicly admit that his assumptions about the rationality of markets were wrong. A few months before this confession, I could never have imagined that Greenspan would utter such a statement. Aside from feeling vindicated, I also felt that Greenspan’s confession was an important step forward. After all, they say that the first step toward recovery is admitting you have a problem.
Still, the terrible loss of homes and jobs has been a very high price to pay for learning that we might not be as rational as Greenspan and other traditional economists had thought. What we’ve learned is that relying on standard economic theory alone as a guiding principle for building markets and institutions might, in fact, be dangerous. It has become tragically clear that the mistakes we all make are not at all random, but part and parcel of the human condition. Worse, our mistakes of judgment can aggregate in the market, sparking a scenario in which, much like an earthquake, no one has any idea what is happening. All of a sudden, it looked as if some people were beginning to understand that the study of small-scale mistakes was not just a source for amusing dinner-table anecdotes. I felt both exonerated and relieved.
While this is a very depressing time for the economy as a whole, and for all of us individually, the turnabout on Greenspan’s part has created new opportunities for behavioral economics, and for those willing to learn and alter the way they think and behave. From crisis comes opportunity, and perhaps this tragedy will cause us to finally accommodate new ideas, and, I hope, begin to rebuild.
Standard vs behavioral economics (Supermen of the Mind)
Pigs replace economics
It’s hard to displace a global economic crisis from headlining the news, but the pigs did it. A new variant of the H1N1 flu virus, associated in our lore with the 1918 flu pandemic, has jumped species and infected humans. There are reported deaths (though numbers and details vary wildly) and cases appear to have spread globally.
The media jumped on this new crisis, politicians around the world thanked Providence for something to distract voters from their ethical lapses and for the opportunity to pad their budgets, pharmaceutical stocks rallied, airline stocks tanked, and the conspiracy theories ran wild. The Russians stopped importing pork, even though you don’t get the flu from eating pork.
On the positive side, a few more people started washing their hands. This is a rational response; hygiene is an innovation that works. (Purell and other hand disinfectants work in a pinch, but washing your hands for at least one minute, with a long rinse in running warm water, is better.)
Three of our predictable irrationalities give the swine flu story much more impact than it should have — and in this case, it would be better if we were more rational.
One: Unlike the agents in economic models, we have limited memory and limited thinking capacity; to manage this we shift our attention depending on outside information. Or, in non-academese, we pay attention to what’s happening now: things that are recent and things that are repeated often get more attention, even if they are not that important. Because the news focuses on the negative (it’s their business model), we keep hearing about the cases discovered, and not about the millions of people who were exposed and didn’t get sick. Which gets us to point two:
Two: We overweigh new risks relative to comparable risks we are accustomed to. Around 100 people per day died on US roads in 2008, an enormous improvement over previous years, but still. People obsessing about spending 5 minutes in elevators with others (an infinitesimal chance of contagion) will blithely cross the street against the light to have an artery-clogging triple cheeseburger with fries and then smoke a pack of cigarettes. These things carry much higher risks, but because we have grown accustomed to them, we don’t think of the risks. They are not, in the technical term, salient; but they are much more dangerous. Still, their dangers are dry statistics, and people are not good with statistics, which gets us to point three:
Three: Brains are wired to work well with stories. And there are many stories one can make from the news reports: pandemics amplified by airport air recycling and global travel; mass extinction followed by anarchy and mayhem; terrorism taking advantage of the burden on the health system; the flu as prelude to alien invasion from Alpha Centauri. Ok, the last one only works around the MIT Media Lab. But we love stories, and forget that the plural of anecdote is not data. Statistics, dry as they may be, give a lot more information than stories.
It is not that this problem isn’t real and important; I just don’t think that, relative to our other problems, it is as big as we are making it out to be.
What can we do? As the British said during the Blitz: keep calm and carry on. Take appropriate precautions, wash your hands, and if you’re sick, get help and keep out of crowds.
Irrationality is the real invisible hand
Adam Smith first coined the term “The Invisible Hand” in his important book “The Wealth of Nations.” With this term he was trying to capture the idea that the marketplace would be self-regulating. The basic principle of the invisible hand is that though we may be unaware of it, an unseen hand is constantly prodding us along to act in line with what’s best for the whole economy. This means that when this invisible hand exists, when we all pursue our own interest, we end up promoting the public good, and often more effectively than if we had actually and directly intended to do so. This is a beautiful idea, but the question of course is how closely it represents reality.
In 2008, a massive earthquake reduced the financial world to rubble. Standing in the smoke and ash, Alan Greenspan, the former chairman of the Federal Reserve Bank once hailed as “the greatest banker who ever lived,” confessed to Congress that he was “shocked” that the markets did not operate according to his lifelong expectations. He had “made a mistake in presuming that the self-interest of organizations, specifically banks and others, was such that they were best capable of protecting their own shareholders.”
We are now paying a terrible price for our unblinking faith in the power of the invisible hand.
In my mind this experience has taught us that Adam Smith’s version of the invisible hand does not exist, but that a different version of the invisible hand is very real, very active, and very dangerous if we don’t learn to recognize it. Perhaps a more accurate description of the invisible hand is that it represents human irrationality. In terms of irrationality, the hand that guides our behavior is clearly invisible — after all, recent events have demonstrated that we are largely blind to the roles irrationality plays in our lives and our institutions. Moreover, it is also clear that irrationality shapes our behavior in many ways, pushing and prodding us along a path that can lead to destruction. Whether we’re procrastinating on our medical check-ups, letting our emotions get the best of us, or letting conflicts of interest and short-term time horizons ruin the financial market, irrationality is certainly involved.
In Adam Smith’s world the invisible hand was a wonderful force, and the fact that it was invisible made no difference whatsoever. The irrational invisible hand is a different story altogether – here we must identify the ways in which irrationality plays tricks on us and make the invisible hand visible!