
The (un-)expected link between the human and the artificial mind

September 25, 2019 by Dan Ariely

Understanding the human mind is key to designing better artificial minds.

A short report based on a paper by Darius-Aurel Frank, Polymeros Chrysochou, Panagiotis Mitkidis, and Dan Ariely

Technology around us is becoming smarter. Much smarter. And with this increased smartness come many benefits for individuals and society. Specifically, smarter technology means that we can let it make better decisions for us and enjoy the outcomes of those good decisions without having to make the effort. All of this sounds amazing: better decisions with less effort. However, one of the challenges is that decisions sometimes involve moral tradeoffs, and in these cases we have to ask ourselves whether we are willing to delegate these moral decisions to a non-human system.

One of the clearest cases for such moral decisions involves autonomous vehicles. Autonomous vehicles have to decide which lane to drive in and who gets the right of way at a busy intersection. But they also have to make much more morally complex decisions, such as whether to disregard traffic regulations when asked to rush to the hospital, or whose safety to prioritize in the event of an inevitable car accident. With these questions in mind, it is clear that handing decisions over to machines is not that easy, and that if we want autonomous vehicles to decide for us, we first need a good model of our own morality.

This brings us to the main question of this paper: how should we design artificial minds in terms of their underlying ethical principles? What guiding principles should we use for these artificial machines? For example, should the principles guiding these machines be to protect their owners above all others? Or should they view all living creatures as equals? Which autonomous vehicle would you like to have, and which would you like your neighbor to have?

To start examining these kinds of questions, we followed an experimental approach and mapped the decisions of many people to uncover the common denominators and potential biases of the human mind. In our experiments, we followed the Moral Machine project and used a variant of the classic trolley dilemma, a thought experiment in which people are asked to choose between undesirable outcomes under different framings. In our dilemmas, decision-makers were asked to choose whom an autonomous vehicle should sacrifice in the case of an inevitable accident: the passengers of the vehicle or pedestrians in the street. The results are published under an open-access license and are available for everyone to read for free at: https://www.nature.com/articles/s41598-019-49411-7.

In short, we find two biases that influence whether people prefer that the autonomous vehicle sacrifice the pedestrians or the passengers. The first bias is related to the speed with which the moral decision is made: quick, intuitive moral decisions favor sacrificing the passengers, regardless of the specific composition of the dilemma, while deliberate, thoughtful decisions more often favor sacrificing the pedestrians. The second bias is related to the initial perspective of the person making the judgment: those who started out caring more about the passengers in the dilemma ended up sacrificing the pedestrians more often, and vice versa. Interestingly, overall and across conditions, people prefer to save the passengers over the pedestrians.

What we take away from these experiments, and from the Moral Machine, is that we have some decisions to make. Do we want our autonomous vehicles to reflect our own morality, biases and all, or do we want their moral outlook to be like that of Data from Star Trek? And if we want these machines to mimic our own morality, do we want it to be the morality expressed in our immediate gut feelings, or the one that shows up after we have considered a moral dilemma for a while?

These questions might seem like academic philosophical debates that almost no one should really care about, but the speed at which autonomous vehicles are approaching suggests that these questions are both important and urgent for designing our joint future with technology.