The trolley problem and the evolution of war

The trolley problem is a famous thought experiment in philosophy, and it runs something like this.

A runaway trolley is hurtling down a track. Five people have been tied to the track directly in front of the trolley, but there is a switch that allows the trolley to be diverted to an alternative track, where one person has been tied down. You are standing at the switch and see the disaster unfolding. What do you do? Most people answer that they would flip the switch, killing one to save five – classic utilitarian thinking.

The trolley problem has been embellished in a variety of interesting ways. The most famous of these is called the fat man problem: five people are tied to the track as before, but now there is a fat man on a bridge over the track, and if you push him off, he will fall in front of the trolley, stopping it and saving the five people as before; of course, the fat man dies in the process. People who were willing to flip the switch to save five people tend to be reluctant to push the fat man off the bridge. Philosophers suggest that this reluctance is based on deontological thinking, in which one’s actions are determined by deep-seated values rather than by cool rational thought.

Josh Greene moved the field forward when he presented the trolley problem and related moral dilemmas to people while scanning their brains using fMRI, finding that

brain regions associated with abstract reasoning and cognitive control (including dorsolateral prefrontal cortex and anterior cingulate cortex) are recruited to resolve difficult personal moral dilemmas in which utilitarian values require “personal” moral violations, violations that have previously been associated with increased activity in emotion-related brain regions. Several regions of frontal and parietal cortex predict intertrial differences in moral judgment behavior, exhibiting greater activity for utilitarian judgments. We speculate that the controversy surrounding utilitarian moral philosophy reflects an underlying tension between competing subsystems in the brain.

What Josh concludes is that the reluctance to push the fat man – the deontological position – is really brought on by proximity: the act is up close and personal rather than disembodied by the technology of the switch on the track. As a result, the emotional circuitry in our brains influences our decision-making in ways that are just plain different from when we have our hands on a mechanical switch.

Despite the elegance of both the dilemmas and Josh’s data, I have always had a bit of a worry about ‘trolleyology’ – it just doesn’t seem likely that I am ever going to encounter a fat man poised on a bridge with a runaway trolley hurtling down a track below me. So what people say they might do in that situation is interesting, but what does it have to do with the real world?

I have recently revised my thinking on this issue. What brought me around was a debate that has broken out over the president’s legal authority to use drones in Libya. In highly charged testimony, State Department legal adviser Harold Koh argued that the president has the requisite authority. What interests me is not the legal nuance but rather one of the four reasons he gave for the position: that the exposure of US armed forces is limited. That got me thinking about the evolution of war. Suddenly, the trolley problem seemed much more relevant to the real world. Consider this.

In early human evolution, we might have used a club as a tool to cudgel an adversary (an enhancement over fists, one might even say!). Eventually, a spear allowed one to fell one’s enemies from a modest distance, and this led to bows and arrows, then guns, then bombs, and now Predator drones. With each technological development, the risk to oneself diminished, and the distance from the up-close-and-personal nature of the conflict grew. This phenomenon is taken to its logical extreme with drones, where the operators sit in a room far away (in the photo below, they are in Fargo, North Dakota) while the drones kill opponents in Libya, Pakistan, or other far-off places.

While the developers of drones were clearly interested in sparing the lives of soldiers (at least allied soldiers), it seems unlikely that they ever considered the psychological effects that distance has upon the operators’ responsiveness. If trolleyology has any predictive power, it would predict this: it is far easier to pull the trigger in North Dakota than if the pilot were closer to the battlefield. But that is still a bit like pulling the switch to move the trolley from one track to the other. At the level of decision makers, the distance between trigger and killing machine grows yet again, and the data would predict that utilitarian judgments – cold, hard calculations devoid of what are often described as moral sentiments – will become even stronger. And so it seems to be, as the administration even uses that distance to provide a new justification for distinguishing between conflict that falls under the umbrella of war and conflict that does not. I think that in some important ways this suggests that Josh is right – it is not the utilitarian-versus-deontological distinction that matters, but rather which parts of our brains are engaged when we consider the issue. Cool calculus is much easier when soldiers are not being put in harm’s way.

For another take on drones and the trolley problem, see this post by Jonah.

Trolley image with standing man from Uncyclopedia

Drone image from Fast Company

Drone pilots from MPR

Cartoons of the trolley problem by John Holbo


7 thoughts on “The trolley problem and the evolution of war”

  1. This topic is gaining interest in the field of roboethics, and I think the way you characterize the problem of distance in warfare is poignant. A great deal of time and money has been spent trying to increase the “effectiveness” of soldiers over the past half-century. Studies (unfortunately I don’t have a reference off-hand) have shown that soldiers intentionally miss their targets quite regularly when they’re firing rifles, and so the push has been to move the “triggers” out of the hands of front-line soldiers and into the hands of far-off ones.

    The latest push is to move the triggers into the hands of robots. The drones you describe are controlled by humans – the humans make targeting and firing decisions, even if the drones are in complete control of the flight dynamics. But there is a move in the Artificial Intelligence (AI) community to automate targeting and firing decisions. Noel Sharkey has written about this, as have Wendell Wallach and Colin Allen (links to their websites at my blog, http://jasonmillar.ca/ethicstechnologyandsociety ).

    Neuroethicists and roboethicists alike ought to be arguing about whether or not we need to model these kinds of “distance from the trigger” psychological factors into the AI that makes targeting and firing decisions. After all, I suspect that some moral psychological features are constitutive of human morality in some fundamental way; that is to say, without them we would not really be “human” in some fundamental sense of the term. If it’s human morality we are concerned with preserving in our autonomous machines, I agree with you that we might not want to be too quick to jettison the findings of the trolley problem.

  2. Pingback: Roboethics, Drones, and the Trolley Problem | Ethics, Technology, and Society

  3. “If trolleyology has any predictive power, it would predict this: it is far easier to pull the trigger in North Dakota than if the pilot were closer to the battlefield.”

    I’d argue the opposite: a man in the field is more likely to pull the trigger because of threat, fear, and nerves, and is more apt to respond with an act of self-preservation; the man in North Dakota feels none of that. The man in the field, with all his senses, is responding to his environment and his perception of it, whereas the detached man in North Dakota sees only a slice.

  4. Pingback: Is “House of Cards” Running a Morality Experiment? | The Narrative Leader

  5. Pingback: El Problema del Tranvía ó... “¿Mato al gordo?” - Naukas

  6. Pingback: Neuroeconomics and Reinforcement Learning: The Concept of Value in the Neuroscience of Morals – Bioethics Research Library
