NYT review of “The Upside”
The New York Times Sunday Book Review just published a review of The Upside of Irrationality.
In general I think the review is very good, but there was one point that made me wonder (and of course I focused on the one less-positive point in the review).
One of the main differences between “The Upside” and PI is that this time around I wrote in a much more personal way about some of my experiences and how they got me thinking differently about various aspects of life (dating, adaptation, pain, etc.). It was very hard to write this way, and while writing I kept wondering whether this was a good approach or not. The NYT reviewer’s reaction was that I was overly personal in my descriptions, and maybe she was correct…
Either way, it would be nice to find out the reaction to this approach — is writing in a more personal way useful or distracting? I would love to get any feedback on this.
Thanks
Irrationally yours
Dan
How to commit the perfect crime
There is a certain perverse pleasure in contemplating the perfect crime.
You can apply your ingenuity to the hypothetical issues of choosing a target, evading surveillance and law enforcement, dealing with contingencies and covering your tracks afterward. You can prove to yourself what an accomplished criminal mastermind you would be, if you so chose.
The perfect crime usually takes the form of a bank robbery in which the criminals cleverly bypass all security systems using neat gadgets, rappelling wires and knowledge they’ve acquired over several weeks of casing the joint. This seems to be an ideal crime because we can applaud the criminals’ cunning, intelligence and resourcefulness.
But it’s not quite perfect. After all, contingencies by definition depend on chance, and therefore can’t ever be perfectly thought out (and in all good bank-robber movies, the thieves either almost get caught or do). Even if the chances of being caught are close to zero, do we really want to call this a perfect crime? The authorities are likely to take it very seriously, and respond accordingly with harsh punishment. In this light, the 0.001 percent chance of getting caught might not seem like a lot, but if you take into account the severity of punishment, such crimes suddenly seem much less perfect.
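To make the arithmetic explicit, here is a back-of-the-envelope sketch of that expected-value logic. Every number in it is hypothetical, chosen only to illustrate the trade-off between the odds of capture and the severity of punishment:

```python
# A toy expected-value calculation for the "perfect crime" logic above.
# All numbers are hypothetical, chosen only to illustrate the trade-off.

def expected_payoff(gain, p_caught, punishment_cost):
    """Expected payoff: the gain if you get away, minus the expected cost of punishment."""
    return (1 - p_caught) * gain - p_caught * punishment_cost

# A daring bank heist: a 0.001 percent chance of capture, but a devastating punishment.
heist = expected_payoff(gain=1_000_000, p_caught=0.00001, punishment_cost=50_000_000_000)

# A "perfect" crime: the same haul and odds, but a negligible penalty if caught.
perfect = expected_payoff(gain=1_000_000, p_caught=0.00001, punishment_cost=10_000)

print(f"heist:   {heist:,.0f}")    # the harsh punishment eats about half the haul
print(f"perfect: {perfect:,.0f}")  # roughly the full gain
```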
In my mind, the perfect crime is one that not only yields a lot of money but also ensures that, if by some small chance you did get caught, no one would care and the punishment would be negligible.
So, with this new knowledge, how would you go about it?
First, the crime would need to be obscure and confusing, making it difficult to detect. Breaking a window and stealing jewelry is too straightforward. Second, the crime should involve many people engaging in the same type of crime so that no one can point a finger at you. This is why looting, though easy to detect, is much more difficult to get a handle on than a single robbery. Third, your crime will need to fall under the shady umbrella of plausible deniability so that if you do get caught, you can always say you didn’t know it was wrong in the first place. With this kind of defense, even if the public cares, the legal system may let you off easy. Moreover, plausible deniability allows you to apologize in the aftermath and ask forgiveness for your “mistake.”
If you really want to go all out, do something you can spin in a positive light, and maybe even create an ideology around it. This way you can then explain how you’re actually on the side of progress. Say, for instance, you’re “providing liquidity” and “lubricating the market” and thereby helping the economy – even if it happens to be by taking people’s money. You can also resort to opaque and promising-sounding language to make your case; you’re “restoring equilibrium,” “eliminating arbitrage” and creating “opportunity” and “efficiency” across the board.
Basically, just bottle snake oil and tell them it will cure, rather than cause, blindness.
Something to avoid, on the other hand, is anything involving an identifiable victim whom people can sympathize with and feel sorry for. Don’t rob one little old lady blind, or any one individual for that matter. It’s part of human nature that we care so much about blue-collar crime, even though the average burglary only costs about $1,300 (according to 2004 FBI crime reports), of which the criminal only nets a few hundred. Crimes like burglaries are the least ideal: they’re simple, detectable, and perpetrated by one person or just a few. They create an obvious victim and can’t be cloaked in rhetoric. Instead, what you should aim for is to steal a little bit of money from as many people as possible — little, old or otherwise, it doesn’t matter, as long as you don’t reverse the fortune of any one individual. After all, when lots of individuals suffer just a bit, people won’t mind as much.
So, what is the ideal crime? Which activity is difficult to detect, involves many people, has plausible deniability, can be supported by an ideology and affects many people just a bit? Yes, I think you know the answer, and it does involve banks…
Seriously, what we have here is a problem with our priorities. We have tremendous regulations for what is legal and illegal in the domain of possessions and blue-collar crime. But what about regulations in banking? It is not that I really think bankers plan and plot crimes for a living (I don’t), but I do think they are continuously faced with tremendous conflicts of interest, and as a consequence they see reality in a way that fits their own wallets and not their clients’. The recent turmoil in the market is just a symptom of this conflict-of-interest problem, and unless we remove conflicts of interest from the banking system, we are going to be part of a long stream of perfect crimes.
This blog post first appeared on a website for a new PBS show called Need To Know.
The Upside of Irrationality is out…
This is an exciting day (but also a bit nerve-racking).
After a lot of hard work, “The Upside of Irrationality” is finally out, and now all I can do is stand by and see how people react to it. The Upside of Irrationality covers different topics from Predictably Irrational, but it is also much more personal (hence the nerve-racking part).
Here is a short intro to this book, and if you end up reading it, let me know what you think:
I am going to be on a book tour for a few weeks, and here is a list of the places where I will be giving talks:
NEW YORK
TUESDAY, JUNE 1, 7:00PM
B&N
2289 Broadway at 82nd Street
SAN FRANCISCO
THURSDAY, JUNE 3, 7:30PM
Berkeley Arts & Letters
Hillside Club
2286 Cedar Street
Berkeley CA 94709
MONDAY, JUNE 7, 7:30PM
The Booksmith
1644 Haight Street
San Francisco CA 94117
SEATTLE
TUESDAY, JUNE 8, 7:30PM
Town Hall Seattle
1119 8th Avenue
Seattle, WA
WEDNESDAY, JUNE 9, Noon
Chamber of Commerce lunch
Rainier Square Conference Center
5th and University
BOSTON
THURSDAY, JUNE 10, 6:00PM
Brattle Theater
40 Brattle Street
Cambridge
WASHINGTON, D.C.
SATURDAY, JUNE 12, 3:30PM
Politics & Prose
5015 Connecticut Avenue NW
ST. LOUIS
MONDAY, JUNE 14, 7:00PM
St. Louis County Library
1640 S. Lindbergh Blvd.
CHARLOTTE NC
TUESDAY, JUNE 15, 7:00PM
Joseph Beth Booksellers
4345 Barclay Downs Dr.
DURHAM NC
MONDAY, JULY 19, 7:00PM
Regulator Bookstore
720 9th Street
Durham, NC
A Focus on Marketing Research
When businesses want to find answers to questions in marketing, whom do they ask? Do they set up experiments to test their ideas, pitting the approach they think is most effective against alternatives? Do they survey consumers on a large scale? Do they go to experts who have questioned and requestioned their theories? Surprisingly, the answer is no. Most often, businesses rely on small “focus groups” to answer big questions. They rely on the intuition of about 10-12 lay people with no relevant training who ultimately have no idea what they’re talking about.
I wonder how this can be a useful strategy. Why ask those who lack any kind of proficiency when, by definition, experts are more knowledgeable on the topic and have experience that could actually be beneficial? And even if experts are narrowly focused and tunnel-visioned, how can relying on lay intuition be better than businesses carrying out their own research?
Research in psychology and behavioral economics has shown time after time that people have bad intuitions. We are very good at explaining our (sometimes shocking and irrational) behavior, and to do so we create neatly packaged stories – stories that may be amusing or provocative, but often have little to do with the real causes of our behaviors. Our actions are often guided by the inner primitive parts of our brain – parts that we can’t consciously access – and because of that we don’t always know why we behave in the ways we do; still, we compensate for this lack of information by writing our own versions. Our highly sophisticated prefrontal cortex (only recently developed, by evolutionary standards) takes the reins and paints a perfect picture to explain what we don’t know. Why did you buy that brand of fabric softener? Of course, because you love the way it makes your clothes smell like a springtime breeze when you pull them out of the warm dryer.
So why do businesses turn to our imagination when we know it’s just a cover for what’s really going on? Why go to the imaginations of a group of people to find real answers? I suspect that the story here is linked to another one of our irrationalities: As human beings, we have an insatiable need for a story. We love a vivid picture, a penetrating example, an anecdote that will stay in our memories. Nothing beats the feeling of knowledge we get from a personal story because stories make us feel connected – they help us relate. Just one example of customer satisfaction has a stronger emotional impact than a statistic telling us that 87% of customers prefer product A over product B. A single example feels real, whereas numbers are cold and sterile. Although statistics about how a large group of people actually behave can tell us so much more than the intuitions of a focus group, the allure of a story is irresistible. Our inherent bias to prefer the story compels us to believe in the worth of small numbers, even when we know we shouldn’t.
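To see how little a dozen intuitions can tell us, consider a toy simulation. It reuses the 87% preference figure from above as a made-up ground truth and asks how far off a 12-person focus group typically is compared with a 1,000-person survey:

```python
# A toy simulation of "the worth of small numbers"; all figures are hypothetical.
# Suppose 87% of the population truly prefers product A. How noisy is a
# 12-person focus group's estimate compared with a 1,000-person survey's?
import random

random.seed(42)
TRUE_PREFERENCE = 0.87  # made-up true share preferring product A

def sample_preference(n):
    """Fraction of n randomly sampled people who prefer product A."""
    return sum(random.random() < TRUE_PREFERENCE for _ in range(n)) / n

trials = 10_000
focus_groups = [sample_preference(12) for _ in range(trials)]
surveys = [sample_preference(1_000) for _ in range(trials)]

def mean_abs_error(estimates):
    """Average distance between the estimates and the true preference."""
    return sum(abs(e - TRUE_PREFERENCE) for e in estimates) / len(estimates)

print(f"focus group (n=12)  typical error: {mean_abs_error(focus_groups):.3f}")
print(f"survey (n=1,000)    typical error: {mean_abs_error(surveys):.3f}")
```

The small group is not just noisier on average; every so often it lands wildly far from the truth, which is exactly the kind of sample a vivid anecdote comes from.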
This “focus group bias” is not just a waste of money; it is also most likely a waste of resources when products are designed according to the “information” gathered from these focus groups. We need to find a way to base our judgments and decisions on real facts and data, even if they seem lifeless on their own. Maybe we should try to supplement the numbers with a story to quench our thirst for an anecdote, but what we can’t do is forget about the facts in favor of fairy tales. In the end, the truth lies in empirical research.
—
Arming the Donkeys is back…
After a short break, my podcast (Arming the Donkeys) is back…
Shaving, Squash, and my birthday
The paperback version of PI is out…
As of today the paperback version of PI is out.
I took out the parts about the stock market, and added two new chapters: one about the effects of social norms, and one about the cycle of distrust in marketers and markets.
Sadly, I cannot distribute these digitally, but if you are ever in a bookstore or a library (or if you think this is worth $10)…
Irrationally yours
Dan
Intrinsic motivation
Jeff Monday has a unique talent for taking topics and explaining them in a simple graphical way. Here is his approach to describing intrinsic motivation.
Thanks again Jeff…
Why Businesses Don’t Experiment
A few years ago, a marketing team from a major consumer goods company came to my lab eager to test some new pricing mechanisms using principles of behavioral economics. We decided to start by testing the allure of “free,” a subject my students and I had been studying. I was excited: The company would gain insights into its customers’ decision making, and we’d get useful data for our academic work. The team agreed to create multiple websites with different offers and pricing and then observe how each worked out in terms of appeal, orders, and revenue.
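To give a flavor of what such a test looks like in miniature, here is a sketch of a randomized pricing experiment. The price points and conversion probabilities are invented, and the real project tracked appeal and orders as well as revenue:

```python
# A minimal sketch of a randomized pricing experiment: each visitor is assigned
# to one price condition at random, and we compare revenue per visitor.
# All traffic, price, and conversion numbers are invented for illustration.
import random

random.seed(0)

# Hypothetical conditions: the price charged and the (unknown in practice,
# to-be-estimated) probability that a visitor buys at that price.
conditions = {
    "free+shipping": (3.00, 0.30),
    "low": (9.99, 0.12),
    "high": (19.99, 0.05),
}

results = {name: {"visitors": 0, "revenue": 0.0} for name in conditions}

for _ in range(30_000):  # simulate 30,000 visitors
    name = random.choice(list(conditions))  # random assignment to a condition
    price, p_buy = conditions[name]
    results[name]["visitors"] += 1
    if random.random() < p_buy:             # the visitor converts
        results[name]["revenue"] += price

for name, r in results.items():
    print(f"{name:14s} revenue per visitor: ${r['revenue'] / r['visitors']:.2f}")
```

The point of the random assignment is that any difference in revenue per visitor can be attributed to the pricing itself, not to who happened to see which offer.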
Several months later, right before we were due to go live, we had a meeting about the final details of the experiment—this time with a bigger entourage from marketing. One of the new members noted that because we were extending differing offers, some customers might buy a product that was not ideal for them, spend too much money, or get a worse deal overall than others. He was correct, of course. In any experiment, someone gets the short end of the stick. Take clinical medical trials, I said to the team. When testing chemotherapy treatments, some patients suffer more so that, down the road, others might suffer less. I hoped this put it in perspective. Fortunately, I said, price testing household products requires far less suffering than chemo trials.
But I could tell I was losing them. In a sense, I was impressed. It was a beautiful human sentiment they were conveying: We care about all customers and don’t want to treat any one of them unfairly. A debate ensued among the group: Are we willing to sacrifice some customers “just” to learn how the new pricing approaches work?
They hedged. They asked me what I thought the best approach was. I told them that I was willing to share my intuition but that intuition is a remarkably bad thing to rely on. Only an experiment gives you the evidence you need. In the end, it wasn’t enough to convince them, and they called off the project.
This is a typical case, I’ve found. I’ve often tried to help companies do experiments, and usually I fail spectacularly. I remember one company that was having trouble getting its bonuses right. I suggested they do some experiments, or at least a survey. The HR staff said no, it was a miserable time in the company. Everyone was unhappy, and management didn’t want to add to the trouble by messing with people’s bonuses merely for the sake of learning. But the employees are already unhappy, I thought, and the experiments would have provided evidence for how to make them less so in the years to come. How is that a bad idea?
Companies pay amazing amounts of money to get answers from consultants with overdeveloped confidence in their own intuition. Managers rely on focus groups—a dozen people riffing on something they know little about—to set strategies. And yet, companies won’t experiment to find evidence of the right way forward.
I think this irrational behavior stems from two sources. One is the nature of experiments themselves. As the people at the consumer goods firm pointed out, experiments require short-term losses for long-term gains. Companies (and people) are notoriously bad at making those trade-offs. Second, there’s the false sense of security that heeding experts provides. When we pay consultants, we get an answer from them and not a list of experiments to conduct. We tend to value answers over questions because answers allow us to take action, while questions mean that we need to keep thinking. Never mind that asking good questions and gathering evidence usually guides us to better answers.
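To put rough numbers on that trade-off, here is a toy calculation. The per-customer profits, the horizon, and the cost of running the test are all assumed; the only point is that a small up-front loss can be dwarfed by the gain from acting on evidence:

```python
# A toy model of "short-term losses for long-term gains"; all numbers assumed.
# Two strategies pay off differently per customer. Intuition says A, but B is
# actually better. Compare trusting intuition with experimenting first.

PAYOFF = {"A": 1.00, "B": 1.20}  # hypothetical profit per customer
HORIZON = 100_000                # customers served in total
EXPERIMENT_SIZE = 5_000          # customers used up by the experiment
EXPERIMENT_COST = 0.50           # assumed per-customer overhead of running the test

# Option 1: skip the experiment and follow intuition (strategy A) throughout.
intuition_profit = HORIZON * PAYOFF["A"]

# Option 2: experiment first (half the test traffic on each arm, at a cost),
# learn that B is better, then use B for everyone who remains.
test_phase = EXPERIMENT_SIZE * ((PAYOFF["A"] + PAYOFF["B"]) / 2 - EXPERIMENT_COST)
exploit_phase = (HORIZON - EXPERIMENT_SIZE) * PAYOFF["B"]
experiment_profit = test_phase + exploit_phase

print(f"intuition only:   {intuition_profit:,.0f}")   # 100,000
print(f"experiment first: {experiment_profit:,.0f}")  # 117,000
```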
Despite the fact that it goes against how business works, experimentation is making headway at some companies. Scott Cook, the founder of Intuit, tells me he’s trying to create a culture of experimentation in which failing is perfectly fine. Whatever happens, he tells his staff, you’re doing right because you’ve created evidence, which is better than anyone’s intuition. He says the organization is buzzing with experiments.
And so is that consumer goods company. A group there is studying consumer psychology and behavioral economics and is amassing evidence that’s impressive by any academic standard. Years after our false start, they’re recognizing the dangers of relying on intuition.
Creating God in Our Own Image
Question: what are God’s views on affirmative action, the death penalty and same-sex marriage? Answer: whatever you want them to be.
That’s according to a recent study by Nicholas Epley, Benjamin Converse, Alexa Delbosc, George Monteleone and John Cacioppo, which found that we tend to ascribe our own views to God.
Past studies have shown that when we reason about other people, we form an opinion of their views based on two sources: egocentric info (i.e., what we ourselves believe) and outside clues (what the other person has said and done, and what others have said about them).
Here, the researchers wanted to find out how much we rely on egocentric info to construe other people’s views, including God’s. To that end, they had devout American participants provide their personal views on various issues (abortion, death penalty, Iraq war, etc.), as well as what they thought were the views of others (Katie Couric, George Bush, the average American, God, etc.).
When the researchers compared participants’ personal views with the participants’ estimates of others’ views, they found one significant pattern: there was a correlation between participants’ personal views and their estimates of God’s view. For example, participants who said they were for same-sex marriage tended to also say that God was for same-sex marriage. And participants who said they were against same-sex marriage tended to also say that God was against same-sex marriage.
But this wasn’t the case for the other figures – Couric, Bush, average American, and so forth. Participants who said they were for same-sex marriage were statistically neither more nor less likely to say that Couric was for same-sex marriage than those who held the opposite view. In other words, what I say Couric thinks has nothing to do with what I myself think. But what I say God thinks has lots to do with what I myself think.
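For readers curious about the mechanics, here is a sketch of that correlational analysis. The ratings are fabricated and the seven-point scale is my own assumption; it simply shows the computation behind the pattern the researchers report:

```python
# A sketch of the correlational analysis described above, on fabricated data.
# Each list holds one rating per participant on a -3 (strongly against) to
# +3 (strongly for) scale; the scale and all values are made up.
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

own_view   = [-3, -2, 0, 1, 2, 3, -1, 2, 3, -2]  # participants' own views
est_god    = [-3, -1, 0, 2, 2, 3, -2, 1, 3, -1]  # tracks own views closely
est_couric = [1, -2, 3, 0, -1, 2, 1, -3, 0, 2]   # roughly unrelated to own views

print(f"own view vs. estimate of God's view:    r = {pearson(own_view, est_god):.2f}")
print(f"own view vs. estimate of Couric's view: r = {pearson(own_view, est_couric):.2f}")
```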
But correlation doesn’t imply causation, so to shed light on the direction of causality, the researchers ran two follow-up experiments. This time, instead of just surveying participants for their current views, they induced participants to change their personal views by randomly assigning them to give speeches for or against an issue (the death penalty) in front of a camera. Because the assignment was random, some people ended up arguing for their personal view, while others argued against it (many past studies have shown that in this context, people tend to shift their own opinions in a direction consistent with the speech they delivered). So, what about the other views (God’s, Couric’s, etc.) – would the participants revise those as well?
Yes and no. The only other view that changed was God’s. As participants’ own views changed, so did their estimates of God’s view. A participant who started out very much for the death penalty but took on a more moderate view after arguing against the death penalty on camera also ascribed a more moderate view to God. But their estimates of the others’ views remained unchanged.
Overall, these results suggest that God is a blank slate onto which we project whatever we choose. We join religious communities that argue for our viewpoint, and we interpret religious readings to support our personal positions.
Irrationally Yours,
Dan
P.S. And happy birthday to my little sister Tali.