Dan Ariely tells Matthew Taylor why it's only by understanding our weaknesses that we can learn to anticipate and avoid mistakes
Matthew Taylor: The UK government has just set up a behavioural insight team, and behavioural economics has attracted a surge of policy interest in recent years. What do you think has driven this trend?
Dan Ariely: Without the financial crisis, I don’t think behavioural economics would have gained the popularity it has. Almost everyone believed that the market was the most rational place on the planet, yet it failed in a magnificent way. This proved that people who deal with large amounts of money are as capable of irrationality – from reckless gambling to myopia and overconfidence – as anybody else.
In addition, over the years, behavioural economics has moved from the lab to the field. Early researchers, such as Daniel Kahneman and the late Amos Tversky, were testing theories of human behaviour using simple gambles. The application of their work to real life wasn’t immediately obvious to everyone. Now, however, we’re looking at what behavioural economics can teach us in a whole range of environments, from schools and kindergartens to banks and hospitals. We’ve shown that it matters in everyday situations, and in ways that almost everyone finds appealing.
MT: Can you give me some examples of these real-world implications?
DA: We've recently done a study looking at how people decide which loans to pay back. It turns out that, when people have multiple loans, they choose to pay back the small ones first, rather than the ones with the highest interest rate. Out of the thousands of people who took part in the study, not a single one followed the perfectly rational strategy of paying down the highest-interest loan first. Here, there’s a clear opportunity to stop people from wasting money by encouraging them to make better financial decisions.
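To make the cost of the intuitive strategy concrete, here is a minimal sketch comparing the two approaches, assuming a fixed monthly repayment budget; all the balances, rates and the budget below are hypothetical figures chosen for illustration, not data from the study.

```python
# Illustrative only: hypothetical balances, rates and budget,
# not figures from the study described above.

def total_interest(loans, budget, key):
    """Simulate monthly repayment with a fixed total budget.

    loans: list of (balance, annual_rate) pairs
    key:   ordering rule deciding which loan the budget attacks first
    Returns total interest paid before all loans are cleared.
    """
    loans = [list(l) for l in loans]
    paid_interest = 0.0
    while any(bal > 0 for bal, _ in loans):
        # Accrue one month of interest on every outstanding loan.
        for loan in loans:
            if loan[0] > 0:
                interest = loan[0] * loan[1] / 12
                loan[0] += interest
                paid_interest += interest
        # Spend the whole budget on loans in the chosen priority order.
        cash = budget
        for loan in sorted((l for l in loans if l[0] > 0), key=key):
            payment = min(cash, loan[0])
            loan[0] -= payment
            cash -= payment
            if cash <= 0:
                break
    return paid_interest

loans = [(1_000, 0.05), (5_000, 0.20), (10_000, 0.10)]  # (balance, annual rate)
budget = 500  # total paid across all loans each month

smallest_first = total_interest(loans, budget, key=lambda l: l[0])
highest_rate_first = total_interest(loans, budget, key=lambda l: -l[1])
print(f"smallest-balance-first interest paid: {smallest_first:,.0f}")
print(f"highest-rate-first interest paid:     {highest_rate_first:,.0f}")
```

Running this shows the highest-rate-first ordering clears the debt with less total interest, which is why it is the rational benchmark against which the study's participants fell short.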
Another study we’re doing is about why people don’t look for second opinions in medicine. There’s an obvious financial motivation for doctors and dentists to overtreat people, which explains why, for instance, 75% of wisdom tooth extractions in the US are estimated to be unnecessary. Second opinions are one of the best remedies for this type of conflict of interest, because they enable patients to get advice from someone other than the person they are paying for treatment.
Conflicts of interest are a good example of the importance of behavioural economics. In standard economics, there’s no such thing as a conflict of interest: a doctor or dentist simply calculates the comparative benefits of giving bad advice and remaining honest. Yet what our study has shown is that, while many practitioners believe they’re always acting in the best interests of their clients, the reality is that they often see the world in terms that are more compatible with their own financial interest. They don’t realise how influenced they are by conflicts of interest, so they don’t even think that it’s important for them to fight against these forces.
MT: Can you explain this gap in our self-knowledge in terms of a distinction between rationality and predictability?
DA: Absolutely. Rationality describes the idea that we all abide by certain laws of economic theory, whereas predictability describes our tendency to do the same thing over and over again. Certain emotions, such as hunger or sexual arousal, temporarily change us. Once triggered, they change us in a very predictable way, but also in a way that we don’t fully appreciate or anticipate. In one experiment, for example, we asked people how they would behave when they were sexually aroused; we then asked them the same questions when they were actually in a state of arousal. What we found was a vast difference between people’s predictions and their actual responses: during arousal, people were much more open to the idea of having unprotected sex or sex with animals. This shows that we don’t anticipate how emotions will influence us, even though their influence is systematic and predictable.
MT: Isn’t there a danger that this research is a bit like Schrödinger’s Cat: once you know the problem exists, it ceases to exist?
DA: This is possible, but it does not seem to be the case. However well you understand the complexity of choice, and the trade-offs it involves between short- and long-term gains, it’s hard not to make certain mistakes. For instance, many US states prohibit drivers from sending text messages at the wheel, yet accident rates have actually gone up in these states, because the new law has prompted people to text below the steering wheel, out of sight, rather than above it. We all understand the dangers of doing this, but when we feel our phone vibrate, we’re tempted to act against our better judgement.
MT: So what does all this mean for policy? There’s a whole range of approaches that you can take to tackle these problems. At one extreme, you can assume that people cannot be trusted to make the right decisions, so you introduce top-down policies to counteract this; at the other extreme, you can believe that people understand their own frailties and, at most, you need to supply them with the tools to combat these problems. Where do you stand on this spectrum?
DA: I think all of these approaches are fine in principle. What’s important is to figure out the causes of each of our misbehaviours and find the ideal approach to combat them. What we basically need is evidence about which approaches are likely to work best in each of the cases.
When people think about this range of possible solutions, one question that comes up is about the extent of paternalism that we are comfortable with. Personally, I’m not against paternalism, but I think that the level should be based on public opinion. My view is that we should think about what kind of society we want to live in and then work out what we need to do in terms of limitations and regulations to achieve it. We don’t all need a neuroscientist’s level of understanding of why we make certain mistakes or act in certain ways, but we do need to attack the cases in which we agree that we’re not doing things the way we would ideally like to be doing them.
One of the areas in which I’m particularly paternalistic is the idea of getting people to do the right thing for the wrong reasons. Think about global warming: it’s the archetype of a problem that people don’t care about. We don’t see people directly suffering from it right now; it will affect other people before it affects us; and anything we do to help is a drop in the ocean. Every force that causes human apathy is combined into this one. If we can’t get people to care about the cause itself, however, perhaps we can get people to do the right thing for the wrong reasons.
It might be a case of posting people’s energy consumption on their Facebook page or in the window of their house, or encouraging children to hassle their parents; anything that helps people acquire the right habits.
MT: This is the kind of insight that drives me to reconsider social conservatism. Economic historian Avner Offer, for example, has argued that society creates certain ‘commitment devices’ – marriage, the welfare state, the church – that enable us to deal with our frailties. He goes on to claim that, when we became affluent in the 1960s and 1970s, we no longer needed these devices, which led to a situation in which we were richer but not happier. So, do you think institutions have a value in motivating people to make decisions that are better for them in the long term?
DA: These institutions have evolved over a long time and involve many clever features. If you spend £25,000 on a wedding in front of lots of people, for instance, you’re less likely to break off the relationship when things are not going as smoothly as you would hope. When we invest in something we believe is irreversible, we often become committed to it and, in doing so, find ourselves enjoying it more.
Another interesting institution is the Catholic practice of confession. From a rational perspective, confession is a strange mechanism: after all, if you knew you could be absolved of cheating, you ought to be inclined to cheat more often, ideally on the way to the church to minimise the chances of dying without absolution. However, our experiments on cheating show a particular pattern: people start by cheating a little bit, because they’re trying to balance feeling good about themselves with benefiting from cheating, but after they’ve reached a certain point, they begin thinking of themselves as cheaters and, once this happens, they start cheating a lot. In this instance, confession really helps, as it gives people a chance to turn over a new leaf.
What this shows is that people inherently want to be honest – more honest, in fact, than traditional economic theory suggests – but once they start thinking of themselves as bad people, there’s nothing to stop them from carrying on down that road.
MT: We’re talking about big problems and big institutions here. Does behavioural economics really have the power to provide solutions to these issues, or are we exaggerating the benefits that it can bring?
DA: Since behavioural economists create the conditions for their experiments, there’s an argument that they can manufacture effects to make them look bigger or smaller. As for whether this matters, you’d get a different opinion depending on whether you talked to a psychologist or an economist. Psychologists would say that they care about the process rather than the effect – what matters is putting something under a microscope and seeing how it works – but for economists and people who want to influence policy, the effect is important.
So, the question is how expensive these behavioural-based interventions are and how significant an impact they will have. Chicago economist John List recently did a six-month study in a factory in China to investigate how productivity changes as a function of whether the incentives are framed as gains or losses. He found that the difference was equivalent to slightly more than one per cent a year. When you think about what that adds up to over 20 years, you realise that it’s hugely significant; in fact, it’s equivalent to the difference between the US and Ethiopia.
I encountered another example recently, when I visited a big drug company that is concerned about the number of diabetics who do not take insulin at mealtimes. The company has spent billions of dollars on improving the technology but, despite this, the compliance rate in the US is less than 30%, which means the current gap is mostly due to psychological barriers. Yet hardly any money has been spent on understanding the human motivation related to mealtime insulin and the reasons for this failure.
MT: One of the things that interests me is how all this relates to social networks. What do you think about the interface between your work and research into the way people influence one another?
DA: Social norms are constantly evolving and they can tell us a lot about how and why people make decisions. In one experiment, we gave students a chance to cheat publicly on a test and found that, once one student had cheated, several others would as well. This was only the case, however, if they thought that the cheating student was part of their ‘in-group’. What this reveals is that people are strongly influenced by what is socially acceptable misbehaviour within their own culture. Culture can take behaviour out of the general moral context and define it in a particular way. It’s likely that this is what happened in the MPs’ expenses scandal in the UK, when MPs started behaving in a corrupt way because they saw a few of their peers doing it.
MT: In the end, is this just a way of adding bells and whistles to the neoclassical view of human nature? It’s a more complex structure of incentives, but you’re essentially saying that people are individualistically driven and that, if we can understand enough of what drives them, we can propose solutions. Is behavioural economics original, or is it just a new term for an old approach?
DA: I think we needed a new term so that we could launch a convincing attack against neoclassical economics, which people in policy and business have relied on for a long time. Economics has become the most successful social science by being dogmatic and imperialistic; the implication is that an introduction to economics is all you really need to design policy. If economists were to admit that their theory of human nature only explains a small part of human behaviour and has to be studied in conjunction with other social sciences, I don’t think we would need to give such a specific label – behavioural economics – to our field.
MT: Do you believe that all these different scientists and economists can ever come together to produce an integrated account of human nature?
DA: No, I don’t think we’ll see an integrated account, but I believe there’s the potential to develop a single field of study that is effectively applied social science. Economists, sociologists and psychologists will always take different approaches, but when we come to implement a policy or business decision, I hope we’ll be able to combine all their inputs to create experiments that test a wide range of possible solutions in order to find out what really works best.
The factors that affect human economic activity will remain many and complex, especially as we continue to change the world around us. Can we ever produce a single account of human nature? No. Can we get closer? I certainly hope so.
Dan Ariely is professor of psychology and behavioural economics at Duke University