Over the last few years, many individuals (myself included) have felt tremendous anger about the level of executive compensation in the US, an anger that is particularly strong against those in the financial sector. As you can see in the chart below, there is an incredible gap between CEO compensation in the US and in most other countries. Bankers are paying themselves exorbitant wads of cash, seen in both their salaries and their bonuses. In their defense, the bankers in question claim that such extravagant wages are essential to properly motivate them, and that without such motivation they would simply go and find a job somewhere else (they never specify exactly which jobs they would get or who would pay them more, but that is another matter).
Inordinate compensation levels aside, it is important to figure out more generally how payment translates into motivation and performance. There is a general assumption that more money is more motivating, and that we can improve job performance simply by paying people more, either through a higher base salary or, supposedly even better, through performance-based incentives – that is, bonuses. But is this an efficient way to compensate people and drive them to be the best that they can be?
A new paper* by Mike Norton and his collaborators sheds a very interesting light on the ways that organizations should use money to motivate their employees, boost morale, and improve performance – benefiting both employees and their organizations. The researchers look at a few ways that money can be spent and how each affects outcomes such as employee wellbeing, job satisfaction, and actual job performance. Specifically, they examine the effect of prosocial incentives, where people spend money on others rather than themselves, and they find that there are many benefits to spending money on others (think about the inherent joy of gift-giving).
In the first experiment, the researchers gave charity vouchers worth $25 or $50 to Australian bank employees and asked them to donate the money to a charity of their choice. Compared to people who did not receive the charity vouchers, those who donated $50 (but not $25) claimed to be happier and more satisfied with their jobs.
The second experiment took the concept of prosocial incentives a step further by directly comparing people who were asked to spend money on themselves (a personal incentive) with those who were asked to purchase a gift for a teammate (a prosocial incentive). This experiment took place in two different settings — with sales and sports teams — and looked at a broader range of outcomes. It examined not only employee satisfaction but also the other side: benefits to the organization in terms of employee performance and return on investment. While neither sales nor sports teams improved when people were given money to spend on themselves, Norton and his colleagues found vast improvements for those who engaged in prosocial spending. In the process of purchasing a gift for a teammate, people also became more interested in that teammate and more willing to help them in multiple other ways.
If we compare these experiments, we can also see that while a gift of $25 did not make a difference when it was donated to a faceless and impersonal charity, a gift of $20 produced numerous positive outcomes when it took the form of helping out a teammate. Thus, it appears that we reap the greatest benefits when we spend money on others, and even more so when we spend it on those close to us.
Taken together, these results also suggest that our intuitions lead us down the wrong path when we assume that we will be happiest and most motivated when we earn money to spend on ourselves. The findings from this paper can be extended to recommendations for current business practices, particularly in cases where compensation is very high. In fact, Credit Suisse has gotten a head start on adopting the idea of prosocial spending: it has recently implemented a program requiring its employees to donate at least 2.5% of their bonuses to charity. Now, is this just a PR trick to try to defuse some of the anger that people feel these days about bankers, or is this a real effort to increase and improve motivation? I don’t know. But what is clear to me is that prosocial incentives, whether in the form of charitable donations or team expenditures, can be an effective means of encouraging more positive behavior for individuals, their teammates, and society.
* Norton MI, Anik L, Aknin LB, Dunn EW & Quoidbach J (manuscript under review). Prosocial Incentives Increase Employee Satisfaction and Team Performance.
There are few topics on which Mother Teresa and Joseph Stalin agreed, but the cause of human apathy is one of them. So I suspect that both would be surprised – as I am — by the reaction to the BP oil spill.
If six months ago someone were to describe to me a tremendous oil spill and ask me to predict our collective reaction to it, I would have said that we would be highly interested in this disaster for a week or two and, after that short time, our interest would dwindle to “mildly interested.” After all, we (the public) appear only vaguely interested in a whole slew of environmental issues. The destruction of the Amazon rainforest, for example, has been going on for decades. Since 1970 we’ve managed to destroy about 600,000 square miles (www.mongabay.com/brazil.html), but we’re so used to these kinds of statistics that no one seems to care much.
So, why is it that we care so much more about the BP oil spill than about what happens on a daily basis in the Amazon? Here’s what we know about human caring and compassion. First and foremost, it is based on our emotions rather than our reasoning. Joseph Stalin said, “One death is a tragedy, a million is a statistic.” Mother Teresa said, “If I look at the masses I will never act, but if I look at the one I will.” In oil spill terms: We see pelicans and turtles mired and dying in oil, and we want to cry. We hear about families who have had their homes ruined and their livelihoods horribly affected or even destroyed, and we sympathize with their helplessness and want to do something to help them recover. Our compassion isn’t necessarily proportional to the magnitude of the catastrophe; it depends on how much of our emotion is invoked.
Perhaps I’m mistaken about human apathy, but it is also possible that there are particular features of the BP oil spill that influence how much we care, and that if these features were different, we would care substantially less, even if the magnitude of the disaster were the same.
Here are a few characteristics that might differentiate the BP oil spill from the destruction of the Amazon. First, it is a singular event with a precise beginning. Second, while the tragedy was ongoing (and we are not yet sure if it has ended or not) it seemed to become more desperate by the day. Third, we have a single organization that we can villainize. In contrast, in the Amazon, there are many organizations and individuals at fault, both in the countries where deforestation is occurring and abroad. And fourth, the Gulf is so much closer to home (at least for Americans).
The BP oil spill is, of course, a hugely devastating tragedy. At this stage, we don’t fully understand the magnitude of its consequences, which will likely last for decades. At the same time, it might be worthwhile to take this moment in history as an opportunity – while our concern about this tragedy is still high – to reflect on our larger relationship with the oceans, and the apathy with which we generally greet the less dramatic, but perhaps equally devastating, environmental consequences of overfishing and “everyday pollution.”
I suspect that, because our abuse of the oceans is commonly the result of many small steps by many people, we fail to become enraged with either the process or the outcomes. But we should be. And we should do our best to take better care of our oceans, and not only when the pollution is caused by a single large, easily villainized organization.
Maybe this is another case in which we want to make sure that we don’t waste a really good crisis (for a related missed opportunity, see the financial crisis). Maybe it is time to look more broadly at our interactions with the oceans and turn them into a better long-term relationship, and maybe we need to do this while we still care, before our interest in the oceans dissipates.
There is a certain perverse pleasure in contemplating the perfect crime.
You can apply your ingenuity to the hypothetical issues of choosing a target, evading surveillance and law enforcement, dealing with contingencies and covering your tracks afterward. You can prove to yourself what an accomplished criminal mastermind you would be, if you so chose.
The perfect crime usually takes the form of a bank robbery in which the criminals cleverly bypass all security systems using neat gadgets, rappelling wires and knowledge they’ve acquired over several weeks of casing the joint. This seems to be an ideal crime because we can applaud the criminals’ cunning, intelligence and resourcefulness.
But it’s not quite perfect. After all, contingencies by definition depend on chance, and therefore can’t ever be perfectly thought out (and in all good bank-robber movies, the thieves either almost get caught or do). Even if the chances of being caught are close to zero, do we really want to call this a perfect crime? The authorities are likely to take it very seriously, and respond accordingly with harsh punishment. In this light, the 0.001 percent chance of getting caught might not seem like a lot, but if you take into account the severity of punishment, such crimes suddenly seem much less perfect.
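That last point is just expected value: a tiny probability of getting caught still carries a large expected cost if the punishment is severe enough. A toy calculation makes the point (all the numbers below are made up for illustration, including the admittedly arbitrary dollar figure we put on a ruined life):

```python
def expected_value(loot, p_caught, punishment_cost):
    """Expected payoff of a crime: the gain weighted by the chance of
    getting away with it, minus the punishment weighted by the chance
    of capture."""
    return (1 - p_caught) * loot - p_caught * punishment_cost

# Hypothetical heist: $1M in loot and a 0.001% chance of capture, but
# capture means decades in prison -- a subjective cost we price here
# at $200B. Even a minuscule capture probability flips the math.
print(expected_value(1_000_000, 0.00001, 200_000_000_000))  # negative
```

With these (invented) numbers, the expected payoff is roughly negative one million dollars, even though the crime succeeds 99.999 percent of the time.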
In my mind, the perfect crime is one that not only yields more money, but also ensures that, if by some small chance you did get caught, no one would care and the punishment would be negligible.
So, with this new knowledge, how would you go about it?
First, the crime would need to be obscure and confusing, making it difficult to detect. Breaking a window and stealing jewelry is too straightforward. Second, the crime should involve many people engaging in the same type of crime so that no one can point a finger at you. This is why looting, though easy to detect, is much more difficult to get a handle on than a single robbery. Third, your crime will need to fall under the shady umbrella of plausible deniability so that if you do get caught, you can always say you didn’t know it was wrong in the first place. With this kind of defense, even if the public cares, the legal system may let you off easy. Moreover, plausible deniability allows you to apologize in the aftermath and ask forgiveness for your “mistake.”
If you really want to go all out, do something you can spin in a positive light, and maybe even create an ideology around it. This way you can then explain how you’re actually on the side of progress. Say, for instance, you’re “providing liquidity” and “lubricating the market” and thereby helping the economy – even if it happens to be by taking people’s money. You can also resort to opaque and promising-sounding language to make your case; you’re “restoring equilibrium,” “eliminating arbitrage” and creating “opportunity” and “efficiency” across the board.
Basically, just bottle snake oil and tell them it will cure, rather than cause, blindness.
Something to avoid, on the other hand, is anything involving an identifiable victim whom people can sympathize with and feel sorry for. Don’t rob one little old lady blind, or any one individual for that matter. It’s part of human nature that we care so much about blue-collar crime, even though the average burglary only costs about $1,300 (according to 2004 FBI crime reports), of which the criminal only nets a few hundred. Crimes like burglaries are the least ideal crimes: they’re simple, detectable, and perpetrated by one person or just a few. They create an obvious victim and can’t be cloaked in rhetoric. Instead, what you should aim for is to steal a little bit of money from as many people as possible — little, old or otherwise, it doesn’t matter, as long as you don’t reverse the fortune of any one individual. After all, when lots of individuals each suffer just a bit, people won’t mind as much.
So, what is the ideal crime? Which activity is difficult to detect, involves many people, has plausible deniability, can be supported by an ideology and affects many people just a bit? Yes, I think you know the answer, and it does involve banks…
Seriously, what we have here is a problem with our priorities. We have tremendous regulations for what is legal and illegal in the domain of possessions and blue-collar crime. But what about regulations in banking? It is not that I really think bankers plan and plot crimes for a living (I don’t), but I do think they are continuously faced with tremendous conflicts of interest, and as a consequence they see reality in a way that fits their own wallets and not their clients’. The recent turmoil in the market is just a symptom of this conflict-of-interest problem, and unless we remove conflicts of interest from the banking system, we are going to be part of a long stream of perfect crimes.
A few years ago, a marketing team from a major consumer goods company came to my lab eager to test some new pricing mechanisms using principles of behavioral economics. We decided to start by testing the allure of “free,” a subject my students and I had been studying. I was excited: The company would gain insights into its customers’ decision making, and we’d get useful data for our academic work. The team agreed to create multiple websites with different offers and pricing and then observe how each worked out in terms of appeal, orders, and revenue.
Several months later, right before we were due to go live, we had a meeting about the final details of the experiment—this time with a bigger entourage from marketing. One of the new members noted that because we were extending differing offers, some customers might buy a product that was not ideal for them, spend too much money, or get a worse deal overall than others. He was correct, of course. In any experiment, someone gets the short end of the stick. Take clinical medical trials, I said to the team. When testing chemotherapy treatments, some patients suffer more so that, down the road, others might suffer less. I hoped this put it in perspective. Fortunately, I said, price testing household products requires far less suffering than chemo trials.
But I could tell I was losing them. In a sense, I was impressed. It was a beautiful human sentiment they were conveying: We care about all customers and don’t want to treat any one of them unfairly. A debate ensued among the group: Are we willing to sacrifice some customers “just” to learn how the new pricing approaches work?
They hedged. They asked me what I thought the best approach was. I told them that I was willing to share my intuition but that intuition is a remarkably bad thing to rely on. Only an experiment gives you the evidence you need. In the end, it wasn’t enough to convince them, and they called off the project.
This is a typical case, I’ve found. I’ve often tried to help companies do experiments, and usually I fail spectacularly. I remember one company that was having trouble getting its bonuses right. I suggested they do some experiments, or at least a survey. The HR staff said no, it was a miserable time in the company. Everyone was unhappy, and management didn’t want to add to the trouble by messing with people’s bonuses merely for the sake of learning. But the employees are already unhappy, I thought, and the experiments would have provided evidence for how to make them less so in the years to come. How is that a bad idea?
Companies pay amazing amounts of money to get answers from consultants with overdeveloped confidence in their own intuition. Managers rely on focus groups—a dozen people riffing on something they know little about—to set strategies. And yet, companies won’t experiment to find evidence of the right way forward.
I think this irrational behavior stems from two sources. One is the nature of experiments themselves. As the people at the consumer goods firm pointed out, experiments require short-term losses for long-term gains. Companies (and people) are notoriously bad at making those trade-offs. Second, there’s the false sense of security that heeding experts provides. When we pay consultants, we get an answer from them and not a list of experiments to conduct. We tend to value answers over questions because answers allow us to take action, while questions mean that we need to keep thinking. Never mind that asking good questions and gathering evidence usually guides us to better answers.
Despite the fact that it goes against how business works, experimentation is making headway at some companies. Scott Cook, the founder of Intuit, tells me he’s trying to create a culture of experimentation in which failing is perfectly fine. Whatever happens, he tells his staff, you’re doing right because you’ve created evidence, which is better than anyone’s intuition. He says the organization is buzzing with experiments.
And so is that consumer goods company. A group there is studying consumer psychology and behavioral economics and is amassing evidence that’s impressive by any academic standard. Years after our false start, they’re recognizing the dangers of relying on intuition.