Law school has swallowed my blogging efforts here for a while, but over at the Stanford Center for Law and the Biosciences blog I have a new post looking at transcranial electrotherapy stimulators in the context of FDA regulation.
Although the topic of cigarette packaging regulation may not leap immediately to mind when one thinks “neuroethics,” this Bob Greene opinion piece over at CNN nonetheless touched off a stimulating discussion among some of us at the Core recently. The neuroethics connection, in fact, struck us as quite natural: our group has researched (and blogged about) the ethics of “nudging” frequently of late, and, as I worded it when I first emailed the article around, “certainly the images at issue here are a kind of behavioural nudge.” The question that we grappled with was whether the kind of nudge that the graphic warning labels provide is warranted in the case of cigarettes. And, indeed, that discussion called my original characterization into question. Do these labels truly constitute a nudge – a subtle biasing technique that makes a particular option more cognitively accessible than another while preserving the freedom to choose between them – or are they something more akin to a “shove”?
As with any highly politicized issue, the question of whether cigarettes ought to be labeled with disturbing imagery is likely to be filleted into oblivion by pundits, bloggers, legal experts, economists, et cetera, et cetera. All I hope to do here, then, is sketch some ways in which the view from neuroethics – informed as it is by philosophy and the cognitive sciences – can shed some interesting and hopefully useful light on the question.
The National Core for Neuroethics had a lively journal club discussion recently on a paper by Bertram Malle and Sarah Nelson that dealt with “the tension between folk concepts and legal concepts of intentionality.” As I was presenting the paper and facilitating the discussion, I decided to blog about it to share some of the highlights with our readers and crystallize my own thoughts on the matter, stirred up as they were by the proceedings.
The basic gist of the paper is as follows. Malle and Nelson identify “the valid and precise use of the concepts of mental states in reasoning about the defendant’s actions and in assigning responsibility, blame, and punishment” as a central challenge in creating a system of criminal adjudication. (One interesting point to consider going forward is how these same issues might apply to the context of torts, where the epistemic standard is not “beyond a reasonable doubt” but the less demanding “balance of probabilities.”) In legal contexts, the term used to refer to the mental states in question is mens rea, Latin for “guilty/sinful mind.”
The specific mental state that the paper is concerned with is intention, especially as it relates to intentional action. In the grand tradition of experimental philosophy (though it really wasn’t yet a tradition in 2003!), Malle and Nelson find the by-now familiar faults with how these concepts have been developed in legal theory and philosophy – with theories of intentional action checked primarily against the intuitions of a small, non-representative group of participants in the debate, leading to a confusing mismatch between how the law asks us to use concepts, and how we (generally) are inclined to actually use them.
The paper explores the possibility of replacing the contorted and awkward legal concept of intentionality (here used as a synonym for intentional action) with a “folk” theory, one which Malle and a then-lesser-known Joshua Knobe had already undertaken to construct via several studies. Malle’s rationale is that, should the folk concept of intentionality prove to be consistent in its use and appropriate to the goals of law, it will be a natural successor to the status quo.
Over at Pharyngula, PZ Myers’ reliably lively ScienceBlogs province, a recent post offers some incisive treatment of a philosophically arresting debate between Sam Harris and sundry interlocutors, most prominently Sean Carroll. The topic – whether science can answer moral questions, or, more perilously rendered, whether one can, in fact, derive “ought” from “is” – exerts a tidal attraction upon my blogging muscles, but I can resist for now; Myers’ relevance to this entry issues specifically from a choice bit of phraseology in his write-up.
When Harris claims that the discernment of human well-being (and hence of utility-maximizing courses of action) is a purely empirical matter, Myers finds him guilty of “smuggling in an unscientific prior in his category of well-being.” What I want to explore after the jump is the following possibility:
It may be that the developing body of work in neuroscience and psychology probing various morally charged phenomena has been smuggling in a politically loaded prior under the terminologically neutral guise of the category “prosocial behaviour.”
This week, the Core’s journal clubbers discussed a paper by Sarina Rodrigues et al., entitled “Oxytocin receptor genetic variation relates to empathy and stress reactivity in humans.” The abstract follows:
Oxytocin, a peptide that functions as both a hormone and neurotransmitter, has broad influences on social and emotional processing throughout the body and the brain. In this study, we tested how a polymorphism (rs53576) of the oxytocin receptor relates to two key social processes related to oxytocin: empathy and stress reactivity. Compared with individuals homozygous for the G allele of rs53576 (GG), individuals with one or two copies of the A allele (AG/AA) exhibited lower behavioral and dispositional empathy, as measured by the “Reading the Mind in the Eyes” Test and an other-oriented empathy scale. Furthermore, AA/AG individuals displayed higher physiological and dispositional stress reactivity than GG individuals, as determined by heart rate response during a startle anticipation task and an affective reactivity scale. Our results provide evidence of how a naturally occurring genetic variation of the oxytocin receptor relates to both empathy and stress profiles.
Our discussion focused on how best to interpret the findings. What can we really conclude from the correlation discovered in this study? Can we claim to have knowledge of “a gene for empathy” now? If not, what can we reasonably say about why some people are more empathetic than others?
In keeping with our new endeavor to summarize and present the Core’s (usually) weekly journal club discussions, here is the abstract of Kaposy, “Will Neuroscientific Discoveries about Free Will and Selfhood Change our Ethical Practices?” from Neuroethics (2009).
Over the past few years, a number of authors in the new field of neuroethics have claimed that there is an ethical challenge presented by the likelihood that the findings of neuroscience will undermine many common assumptions about human agency and selfhood. These authors claim that neuroscience shows that human agents have no free will, and that our sense of being a “self” is an illusory construction of our brains. Furthermore, some commentators predict that our ethical practices of assigning moral blame, or of recognizing others as persons rather than as objects, will change as a result of neuroscientific discoveries that debunk free will and the concept of the self. I contest suggestions that neuroscience’s conclusions about the illusory nature of free will and the self will cause significant change in our practices. I argue that we have self-interested reasons to resist allowing neuroscience to determine core beliefs about ourselves.
Will neuroscience find itself in the predicament of Cassandra?
An additional catalyst for conversation was a very similar paper by Roskies, “Neuroscientific Challenges to Free Will and Responsibility” from Trends in Cognitive Sciences (2006), whose abstract is as follows:
Recent developments in neuroscience raise the worry that understanding how brains cause behavior will undermine our views about free will and, consequently, about moral responsibility. The potential ethical consequences of such a result are sweeping. I provide three reasons to think that these worries seemingly inspired by neuroscience are misplaced. First, problems for common-sense notions of freedom exist independently of neuroscientific advances. Second, neuroscience is not in a position to undermine our intuitive notions. Third, recent empirical studies suggest that even if people do misconstrue neuroscientific results as relevant to our notion of freedom, our judgments of moral responsibility will remain largely unaffected. These considerations suggest that neuroethical concerns about challenges to our conception of freedom are misguided.
Our participants often bring in relevant quotes from other sources. In this case, one contentious feature of the article was Kaposy’s rather unfavorable forecast regarding the extent to which neuroscientific perspectives will impact humanity’s self-understanding – that “our social norms are stronger determinants of what we believe than any esoteric field of science.” A passage from Steven Pinker’s book The Blank Slate suggests that Messrs. Newton, Darwin, and Mendel, among others, would appreciate a brief word with Kaposy:
Newton’s theory that a single set of laws governed the motions of all objects in the universe was the first event in one of the great developments of human understanding: the unification of knowledge, which the biologist E. O. Wilson has termed consilience. Newton’s breaching of the wall between the celestial and the terrestrial was followed by a collapse of the once equally firm (and now equally forgotten) wall between the creative past and the static present. That happened when Charles Lyell showed that the earth was sculpted in the past by forces we see today (such as earthquakes and erosion) acting over immense spans of time.
The living and the nonliving, too, no longer occupy different realms. In 1628 William Harvey showed that the human body is a machine that runs by hydraulics and other mechanical principles. In 1828 Friedrich Wohler showed that the stuff of life is not a magical, pulsating gel but ordinary compounds following the laws of chemistry. Charles Darwin showed how the astonishing diversity of life and its ubiquitous signs of design could arise from the physical process of natural selection among replicators. Gregor Mendel, and then James Watson and Francis Crick, showed how replication itself could be understood in physical terms.
The unification of our understanding of life with our understanding of matter and energy was the greatest scientific achievement of the second half of the twentieth century. One of its many consequences was to pull the rug out from under social scientists like Kroeber and Lowie who had invoked the “sound scientific method” of placing the living and the nonliving in parallel universes …
This leaves one wall standing in the landscape of knowledge, the one that twentieth-century social scientists guarded so jealously. It divides matter from mind, the material from the spiritual, the physical from the mental, biology from culture, nature from society, and the sciences from the social sciences, humanities, and arts. … this wall, too, is falling.
When considered in these terms, the potential impact of advancing knowledge in neuroscience may indeed appear more impressive. Of course, how can we know for sure until after the fact? Presumably we cannot; as one journal clubber observed, Kaposy may be right to challenge the future-tense, indicative-mood certainty of, e.g., Tancredi (2005) or Greene and Cohen (2004), but this does not justify his own certainty to the contrary: many of Kaposy’s predictions are as stark as those he takes to task.
Further discussion focused on Kaposy’s discussion and subsequent (largely implicit) dismissal of cognitive polyphasia as an explicitly adopted method for simultaneously entertaining the otherwise conflicting perspectives of neuroscience and pre-theoretic human understanding. Opinion was mixed; some identified the tactic as precisely what they have employed all along to get by as both scientists and human beings, or characterized the idea – which consists of holding contradictory ideas in different contexts, essentially maintaining multiple ‘belief-boxes’ for different cognitive applications – as a “clean-burning alternative” to more exclusivity-oriented views of scientific truth versus phenomenological truth. Others worried that a move toward cognitive polyphasia dispenses too easily with a basic standard for consistency of belief. Moreover, it was suggested that polyphasia may well turn out to be impossible to maintain for such deep beliefs as free will and selfhood, especially if we try to begin thinking of dearly valued relationships – especially familial ones – as “robotic” in some sense.
To editorialize briefly, I had a problem with Kaposy’s defense of compatibilism on logical grounds. If, as we all seem to agree is alleged to be the case, neuroscience has placed both the notion of free will and that of a deep self under conceptual fire, then we cannot leave one out in the open while defending the other. But the author, in explaining compatibilism, uses language that is rife with I, me, and my. Essentially, the intelligibility of compatibilism in this case is parasitic on a notion of selfhood which in turn is indicted by the very perspective that compatibilism is trying to placate. (If this sounds familiar, it’s because I’ve been on a big Quine kick all week, and the general form of the critique is straight out of Two Dogmas.) This does not necessarily make compatibilism a circular concept, but unless Kaposy is able to shore up selfhood without making some appeal to free will, his compatibilist option makes for a poor argumentative strategy.
As always, we invite commentary and discussion from our readers.