The use of ‘coercive interrogation’ tactics – and by extension torture – to extract information from individuals is a long-standing, unsettled, and complicated debate. The ethics of torture in particular has come under greater scrutiny in the wake of 9/11, the “war on terror”, and, more recently, as public attention was drawn to the abuses at Abu Ghraib. Torture is currently prohibited under international law, yet many countries around the world are believed to continue using it as a means of obtaining sensitive information, particularly information related to acts of terrorism.
The ethics of ‘coercive interrogation’ or ‘torture’ (note: this post is not concerned with whether there is a distinction, or even a moral distinction, between ‘coercive interrogation’ and ‘torture’; it assumes they are the same. For a broader discussion, see Canada’s Leader of the Official Opposition, Michael Ignatieff, and his Lesser Evil argument) is often divided between deontological and utilitarian (consequentialist) arguments. For example, the claim that torture is a violation of human rights has roots in deontology, while those who justify torture by claiming that coercive interrogation techniques have saved many lives are reasoning from utilitarian premises.
In a recent early access article* in Trends in Cognitive Sciences, Shane O’Mara of Trinity College Dublin raises an objection to torture, but this time the objection rests on scientific grounds. O’Mara reports that coercive interrogation techniques rest on a major assumption: that the extreme stress, anxiety, and shock of torture and interrogation tactics have no impact on the brain and its memory systems. O’Mara shows that the previous Bush Administration actually operated on this incorrect assumption. Indeed, he states, the philosophy behind the interrogations was “based on the idea that repeatedly inducing shock, stress, anxiety, disorientation and lack of control is more effective than standard interrogatory techniques in making suspects reveal information.” The problem is that true information cannot reliably be retrieved from a subject under extreme stress and anxiety, as these states significantly alter the brain and may inhibit the processes that support long-term memory retrieval.
Memory is a complex system of cognition whereby malleable neural systems actively process and retain information, and reconstruct past experiences. As O’Mara outlines in his review, the idea that stress hormones such as cortisol and adrenaline inhibit memory retrieval is not a new one. He thus argues that harsh torture may motivate prisoners to ‘talk’, but its impact on memory retrieval may prevent content from being accurately recalled from long-term memory (and, as he rightly notes, they talk so they won’t be water-boarded). This knowledge will no doubt further support the human rights argument against torture, as even more long-term harm to brain tissue and executive function may be inflicted on the prisoner as a result. However, the larger question remains: will this knowledge change our practices in emergencies or times of war? Or, for those interested in the implications of neuroethics, will our greater understanding of the brain actually separate the is from the ought?
This discussion brings to mind the “ticking time bomb” example, often raised in debates about the value of torture. The example goes something like this:
You are made aware of a bomb that is about to explode in the downtown of some large metropolis and will kill many people. The individual who knows how to defuse the bomb is in your custody and refuses to tell you the whereabouts of the bomb or how to defuse it. The only way to extract the information from him or her is torture. Should the person be tortured?
Though some groups such as Amnesty International have attempted to “defuse” the argument, the emotional nature of the time-bomb scenario (e.g., “would you permit torture so that thousands of innocent people will be saved?”) lets us briefly delve into the moral psychology of the dilemma. O’Mara does, in fact, refer to this time-bomb problem in his paper. Emphasizing the neuroscience, he suggests “that torture is as likely to elicit false as well as true information, and that separating the one from the other will be very difficult.” True, this may be the case for individuals who are ‘coercively interrogated’ over long periods of time. But if we return to our time-bomb example, the interrogation window is extremely short and limited (the bomb is ticking), and it is unlikely that the suspect – assuming no major brain injury or trauma was incurred during the interrogation – will have difficulty recalling information so deeply encoded in the brain as where the bomb is (episodic memory) and how to defuse it (procedural memory).
And so, regrettably, I am unconvinced by, or rather not so optimistic about, the larger question. Perhaps because of North America’s fear-based culture, torture may still be deemed permissible by some in emergency or war-time situations – and especially in “ticking time-bomb” scenarios. I can only hope otherwise, but I am skeptical that the neuroscientific knowledge described by O’Mara will have any impact on the ethics of torture.
*I couldn’t access the article from the journal’s website, but found a non-proofed copy here.