Retribution, the dictionary tells us, is the dispensing of punishment for misdeeds. Derived from the Latin re tribuere, it literally means to pay back. We humans have strong retributive instincts, and it is often said that this behaviour arose as a product of our evolution as social beings: the threat of retribution enforces social norms, and was among the features that increased the likelihood of cooperation among members of society in the early years of human evolution. Given that cooperation confers significant adaptive advantages to the group, retributive norms flourished, and whether via genes or enculturation, the desire for retribution has been passed on to us.

The value of retributive impulses in the modern world is more difficult to discern. We humans are noted for having the ability not only to act instinctively, but also to reflect upon the propriety of our actions. Amongst philosophers, retribution is often contrasted with consequentialism, the notion that the response to a misdeed should produce the best result for society. Consequentialism is possible because the modern human brain is able to reason, and by so doing we are able to anticipate near-, medium- and long-term futures: we can decide whether the best response to a misdeed is retribution, education, or even doing nothing at all. At different times, different responses may be called for. What matters most to the consequentialist is not the payback but rather the outcome. The tension between retribution and consequentialism is a hot-button issue in the field of neurolaw, where neuroessentialists argue that it is time to rethink the concept of punishment, while traditionalists suggest that deterrence remains the best way of organizing civil society.

Rehabilitation, not retribution

My research group has been spending a great deal of time recently discussing responsibility, especially in light of our neuroessentialist perspective. The germ of the idea is this: everything that we do, every decision that we make, is dependent upon the functioning of our brains. Moreover, the entire process is dependent upon the particular details of our brains’ neurochemistry, whether shaped by our genetic heritage or by our life experience. In fact, at the level of the synapse, the source of the neurochemical arrangement is probably irrelevant, and nature and nurture collapse into synaptic function. No voodoo. No mystery. Just chemistry.

We certainly recognize that such neuroessentialist thinking can be unnerving, and there are even data suggesting that such thinking can increase asocial behaviour (here and here). But a line of reasoning, best enunciated in Josh Greene and Jonathan Cohen’s highly influential paper on neuroscience and the law, suggests that it is time to rethink our collective attitudes towards responsibility, especially when we think about how to deal with criminal behaviour. As David Eagleman suggests, perhaps it is time to use our impressive understanding of the human brain to find better ways to rehabilitate criminals rather than punish them.

It turns out that Norway is way ahead.

Day 1: Neurolaw, Deception, Genetics, and a Little bit of Magic

“Reality is merely an illusion, albeit a very persistent one.”

– Albert Einstein.

The opening lecture of the conference fell under the heading of “Dialogues between Neuroscience and Society” and was given by magicians – or rather illusionists – Eric Mead and Apollo Robbins. The Dialogues series began about 5 years ago as a venue for neuroscientists to engage with professionals outside of the neuroscience community and discuss how their work intersects with the brain sciences. Lecturers in years past have included such esteemed individuals as the Dalai Lama. In two separate lectures, Mead and then Robbins demonstrated how illusionists – and sophisticated pickpockets – use principles of psychology and deception to achieve their goals. In many ways, illusionists hijack the cognitive capacities of their targets. Indeed, outside of a stage-show setting this practice may be ethically problematic, as the techniques employed make use of deception, manipulation, and the planting of false memories.

The remainder of the day was devoted to poster viewing. I was mainly interested in viewing the posters from Theme H: History, Teaching, Public Awareness, and Societal Impacts in Neuroscience. Although these posters were condemned to the back walls of the McCormick Center, I was pleased to see how many posters in this category were on display today and that SfN continues to support these important issues.

There were a few posters that particularly caught my eye, and I was able to engage in some interesting discussion with the presenters:

1. Responsibilities of Neuroscience Concerning Aggressive War and Torture – Curtis C. Bell. The poster outlined some of the familiar arguments regarding the use of neuroscience in military activities (see for instance: here and here). In particular, Bell argued that the SfN ought to take a stance and declare its opposition to “aggressive” war and torture, in many ways similar to the statements made by groups such as the American Anthropological Association.

2. Neuroscience, Reason, and Emotion in Legal Decision-Making – Chris Buccafusco. Buccafusco explored the implications of affective neuroscience for the law, particularly Damasio’s Somatic Marker Hypothesis. Although his conclusions were somewhat unclear, he stated that the law ought to focus on the role of empathy in jury judgments of pain and suffering damages.

3. Beyond the Brain: Addiction as a Human Experience (this was the old title on the poster – the title was in reference to biobanks at the Mayo Clinic) – Lefebvre, Maclean, Robinson, McCormick & Koenig. Jennifer McCormick presented the poster, which reported on a study exploring subjects’ hopes, fears, intentions, and expectations in the context of genetic research on addiction. In particular, the authors were interested in study participants’ conceptions of the informed consent process through which they donated samples of their DNA for a biobanking project. Interestingly, the results did not focus on the informed consent process itself – participants instead reported their understanding of a “disease-of-the-brain” construct of addiction and how it related to something that was “in their genes”. Participants believed that a biological conception of their illness would allow for more treatments or “cures” of their condition. McCormick also reported, however, that participants perceived their addiction as “multi-faceted” and looked to psycho-social factors as other ways to explain it.

Going out to hear some Chicago Blues tonight — will report tomorrow with more from SfN, including highlights from the Social Issues Roundtable.

Neuroscience, Free Will, and Selfhood – a National Core for Neuroethics Journal Club

In keeping with our new endeavor to summarize and present the Core’s (usually) weekly journal club discussions, here is the abstract of Kaposy, “Will Neuroscientific Discoveries about Free Will and Selfhood Change our Ethical Practices?” from Neuroethics (2009).

Over the past few years, a number of authors in the new field of neuroethics have claimed that there is an ethical challenge presented by the likelihood that the findings of neuroscience will undermine many common assumptions about human agency and selfhood. These authors claim that neuroscience shows that human agents have no free will, and that our sense of being a “self” is an illusory construction of our brains. Furthermore, some commentators predict that our ethical practices of assigning moral blame, or of recognizing others as persons rather than as objects, will change as a result of neuroscientific discoveries that debunk free will and the concept of the self. I contest suggestions that neuroscience’s conclusions about the illusory nature of free will and the self will cause significant change in our practices. I argue that we have self-interested reasons to resist allowing neuroscience to determine core beliefs about ourselves.

Will neuroscience find itself in the predicament of Cassandra?

An additional catalyst for conversation was a very similar paper by Roskies, “Neuroscientific Challenges to Free Will and Responsibility” from Trends in Cognitive Sciences (2006), whose abstract is as follows:

Recent developments in neuroscience raise the worry that understanding how brains cause behavior will undermine our views about free will and, consequently, about moral responsibility. The potential ethical consequences of such a result are sweeping. I provide three reasons to think that these worries seemingly inspired by neuroscience are misplaced. First, problems for common-sense notions of freedom exist independently of neuroscientific advances. Second, neuroscience is not in a position to undermine our intuitive notions. Third, recent empirical studies suggest that even if people do misconstrue neuroscientific results as relevant to our notion of freedom, our judgments of moral responsibility will remain largely unaffected. These considerations suggest that neuroethical concerns about challenges to our conception of freedom are misguided.

Our participants often bring in relevant quotes from other sources. In this case, one contentious feature of the article was Kaposy’s rather unfavorable forecast regarding the extent to which neuroscientific perspectives will impact humanity’s self-understanding – that “our social norms are stronger determinants of what we believe than any esoteric field of science.” A passage from Steven Pinker’s book The Blank Slate suggests that Messrs. Newton, Darwin, and Mendel, among others, would appreciate a brief word with Kaposy:

Newton’s theory that a single set of laws governed the motions of all objects in the universe was the first event in one of the great developments of human understanding: the unification of knowledge, which the biologist E. O. Wilson has termed consilience. Newton’s breaching of the wall between the celestial and the terrestrial was followed by a collapse of the once equally firm (and now equally forgotten) wall between the creative past and the static present. That happened when Charles Lyell showed that the earth was sculpted in the past by forces we see today (such as earthquakes and erosion) acting over immense spans of time.

The living and the nonliving, too, no longer occupy different realms. In 1628 William Harvey showed that the human body is a machine that runs by hydraulics and other mechanical principles. In 1828 Friedrich Wohler showed that the stuff of life is not a magical, pulsating gel but ordinary compounds following the laws of chemistry. Charles Darwin showed how the astonishing diversity of life and its ubiquitous signs of design could arise from the physical process of natural selection among replicators. Gregor Mendel, and then James Watson and Francis Crick, showed how replication itself could be understood in physical terms.

The unification of our understanding of life with our understanding of matter and energy was the greatest scientific achievement of the second half of the twentieth century. One of its many consequences was to pull the rug out from under social scientists like Kroeber and Lowie who had invoked the “sound scientific method” of placing the living and the nonliving in parallel universes …

This leaves one wall standing in the landscape of knowledge, the one that twentieth-century social scientists guarded so jealously. It divides matter from mind, the material from the spiritual, the physical from the mental, biology from culture, nature from society, and the sciences from the social sciences, humanities, and arts. … this wall, too, is falling.

When considered in these terms, the potential impact of advancing knowledge in neuroscience may indeed appear more impressive. Of course, how can we know for sure until after the fact? Presumably we cannot; as one journal clubber observed, Kaposy may be right to challenge the future-tense, indicative-mood certainty of, e.g., Tancredi (2005) or Greene and Cohen (2004), but this does not justify his own certainty to the contrary; many of Kaposy’s predictions are as stark as those he takes to task.

Further discussion focused on Kaposy’s discussion and subsequent (largely implicit) dismissal of cognitive polyphasia as an explicitly adopted method for simultaneously entertaining the otherwise conflicting perspectives of neuroscience and pre-theoretic human understanding. Opinion was mixed; some identified the tactic as precisely what they have employed all along to get by as both scientists and human beings, or characterized the idea – which consists of holding contradictory ideas in different contexts, essentially maintaining multiple ‘belief-boxes’ for different cognitive applications – as a “clean-burning alternative” to more exclusivity-oriented views of scientific truth versus phenomenological truth.  Others worried that a move toward cognitive polyphasia dispenses too easily with a basic standard for consistency of belief. Moreover, it was suggested that polyphasia may well turn out to be impossible to maintain for such deep beliefs as free will and selfhood, especially if we try to begin thinking of dearly valued relationships – especially familial ones – as “robotic” in some sense.

To editorialize briefly, I had a problem with Kaposy’s defense of compatibilism on logical grounds. If, as we all seem to agree is alleged to be the case, neuroscience has placed both the notion of free will and that of a deep self under conceptual fire, then we cannot leave one out in the open while defending the other. But the author, in explaining compatibilism, uses language that is rife with I, me, and my. Essentially, the intelligibility of compatibilism in this case is parasitic on a notion of selfhood which is in turn indicted by the very perspective that compatibilism is trying to placate. (If this sounds familiar, it’s because I’ve been on a big Quine kick all week, and the general form of the critique is straight out of Two Dogmas.) This does not necessarily make compatibilism a circular concept, but unless Kaposy is able to shore up selfhood without making some appeal to free will, his compatibilist option makes for a poor argumentative strategy.

As always, we invite commentary and discussion from our readers.

Neuroscience and the Ethics of Coercive Interrogation

The issue of using tactics of ‘coercive interrogation’ – and by extension torture – to extract information from individuals is a long-standing, unsettled, and complicated debate. Torture, and in particular the ethics of torture, has come under closer scrutiny in the wake of 9/11 and the “war on terror”, and more recently as public attention was drawn to the abuses at Abu Ghraib. Torture is currently prohibited under international law, yet many countries around the world are believed to continue to use it as a means of obtaining sensitive information, particularly information relating to acts of terrorism.

The ethics of ‘coercive interrogation’ or ‘torture’ (note: this post is not concerned with whether there is a distinction, or even a moral distinction, between ‘coercive interrogation’ and ‘torture’; it assumes that they are the same. For a broader discussion, see Canada’s leader of the Official Opposition, Michael Ignatieff, and his Lesser Evil argument) is often debated in deontological versus utilitarian or consequentialist terms. For example, claiming that torture is a violation of human rights has roots in deontology, while justifying torture on the grounds that many lives have been saved as a result of coercive interrogation techniques stems from utilitarian thinking.

In a recent early-access article* in Trends in Cognitive Sciences, Shane O’Mara of Trinity College Dublin raises an objection to torture, but this time the objection rests on scientific grounds. O’Mara reports that coercive interrogation techniques rest on a sweeping assumption: that the extreme stress, anxiety, and shock of torture and interrogation tactics have no impact on the brain and its memory systems. O’Mara shows that this incorrect assumption was in fact made by the previous Bush Administration; the philosophy behind the interrogations, he states, was “based on the idea that repeatedly inducing shock, stress, anxiety, disorientation and lack of control is more effective than standard interrogatory techniques in making suspects reveal information.” Yet retrieving accurate information from a subject under extreme stress and anxiety is unlikely to succeed, as the brain changes significantly under these conditions in ways that may inhibit the processes contributing to long-term memory retrieval.

Memory is a complex system of cognition whereby malleable neural systems actively process and retain information and reconstruct past experiences. As O’Mara outlines in his review, the idea that stress hormones such as cortisol and adrenaline inhibit memory retrieval is not new. He thus argues that harsh torture may motivate prisoners to ‘talk’, but the impact on memory retrieval may prevent content from being accurately recovered from long-term memory (and, as he rightly notes, prisoners may talk simply so they won’t be water-boarded). This knowledge will no doubt further support the human rights argument against torture, as even more long-term harm to brain tissue and executive function may be inflicted on the prisoner as a result. However, the larger question remains: will this knowledge change our practices in emergencies or times of war? Or, for those interested in the implications of neuroethics, will our greater understanding of the brain actually separate the is from the ought?

This discussion brings to mind the “ticking time bomb” example, often raised in debates about the value of torture. The example goes something like this:

You are made aware of a bomb that is about to explode in the downtown of some large metropolis and will kill many people. The individual who knows how to defuse the bomb is in your custody and refuses to tell you the whereabouts of the bomb or how to defuse it. The only way to extract the information from him or her is torture. Should the person be tortured?

Though some groups such as Amnesty International have attempted to “defuse” the argument, the emotional nature of the time-bomb scenario (e.g., “would you permit torture so that thousands of innocent people will be saved?”) allows us to delve briefly into the moral psychology of the dilemma. O’Mara does, in fact, refer to this time-bomb problem in his paper. Emphasizing the neuroscience, he suggests “that torture is as likely to elicit false as well as true information, and that separating the one from the other will be very difficult.” True, this may be the case for individuals who are ‘coercively interrogated’ over long periods of time. But if we return to our time-bomb example, it is unlikely that over an extremely short and limited period of interrogation (the bomb is ticking) the suspect will have difficulty recalling – assuming no major brain injury or trauma was incurred – such deeply encoded information as where the bomb is (episodic memory) and how to defuse it (procedural memory).

And so, regrettably, I am unconvinced, or rather not so optimistic, about the larger question. Perhaps because of North America’s fear-based culture, torture may still be deemed permissible by some in emergency or war-time scenarios – even “ticking time-bomb” scenarios. I can only hope otherwise, but I am skeptical that the neuroscience described by O’Mara will have any impact on the ethics of torture.

*I couldn’t access the article from the Journal’s website, but found a non-proofed copy here.