The Social Issues Roundtable got underway just after 1pm today, to a capacity crowd. Actually, the room was beyond capacity, with people spilling out to its periphery. At the conclusion of the symposium, the line-up for Q&A was deep, and questions were mainly directed toward issues of science communication (Note: apparently the official title of the Social Issues Roundtable was “Engaging the Public on Ethical, Legal, and Social Implications of Neuroscience Research,” but somehow I didn’t realize that). It is encouraging to see such interest. I briefly summarize the presentations by Patricia Churchland, Barbara Sahakian, Jonathan Moreno, and Hank Greely below. The “roundtable” was moderated by Alan Leshner. Unfortunately, the presenters were restricted to about 10 minutes each, so nobody could really dig deep into any of the issues.
Patricia Churchland – Muddle’s Fallacy: Responsibility Requires Cause-Free Choice
Although Churchland did not describe what Muddle’s Fallacy is (if it is anything at all), she focused her efforts on arguing that a ‘determined’ brain does not eliminate moral and legal responsibility. The problem this raises is how both the law and society can hold someone responsible for their actions if those actions are the end result of a series of pre-determined behaviours. Churchland also stated that society would likely not accept the premise that someone could not be held responsible – at least to some degree – for their actions. This argument is not new: it has been articulated since antiquity by Aristotle, later by David Hume, and closer to our time in elegant papers by Greene & Cohen and Adina Roskies.
Barbara Sahakian – Neuroethics and Society: Pharmacological Cognitive Enhancement
Barbara Sahakian briefly spoke to two issues during her short presentation: public engagement and cognitive enhancement. To engage the public in Neuroethics, Sahakian stated, neuroscientists from the undergraduate to graduate levels “need neuroethics teaching.” This, she claimed, will train future scientists to better communicate their research to the public. Although she didn’t say much more beyond that, Sahakian framed her argument as a matter of duty: scientists have an obligation to the public, particularly because most of them are funded with public money. Second – and somewhat intermingled with the first, though the connection was not entirely clear – Sahakian made a short case for the responsible use of cognitive-enhancing drugs by healthy individuals. For a longer discussion of this argument, see the commentary she co-authored with co-roundtabler Greely and others in Nature.
Jonathan D. Moreno – Neuroethics and National Security
Dr. Moreno probably gave the most entertaining talk of the session. Outlining some of the issues in his book Mind Wars, Moreno discussed the history of the brain sciences in matters of (American) national security. He spoke of some major actors in this history, such as Sidney Gottlieb and Henry K. Beecher, and of Beecher’s involvement with the CIA and the drug experiments of the early 1950s. It was unfortunate that Moreno had limited time. His description of modern uses of neurotechnology by defense services (e.g., oxytocin and torture) was particularly intriguing, and he noted that neuroscientists are not so far removed from the equation, as their work is consistently used to inform major policy documents by the National Academies, such as Emerging Cognitive Neuroscience and Related Technologies.
Hank Greely – Possible Societal Reactions to – and Rejections of – Neuroscience’s Understanding of the Mind
The main theme underlying Greely’s talk was whether or not advances in neuroscience would instigate a conflict similar to the creationist/evolution wars. To illustrate his argument he drew upon three points:
1. What is it in neuroscience that makes people nervous?
2. What are the probabilities of a “neuroscience war”?
3. What pragmatic advice can limit the possibility of a bad outcome?
Greely’s first point, similar to Churchland’s, had much to do with moral intuitions. For instance, he discussed the observation that neuroscience sees no evidence of a soul (he made a remark – jokingly, I presume – about a ‘soul spot’ in the brain) and, again echoing the arguments made earlier by Pat Churchland, noted that neuroscience’s threat to free will is incredibly unsettling to most people (see some really fascinating work on folk intuitions about free will, responsibility, and determinism). Prof Greely also alluded to the uniqueness (or perhaps not, as he was careful to say) of human consciousness and how it separates us from other animals (although many primates do indeed have brains similar to our own). In discussing this, Greely referred to some recent controversies in human chimera research (e.g., the human neuron mouse) and to responses to the science fueled by religious-political ideology (i.e., that man was created in God’s image), including efforts by US Senator Sam Brownback and his Human Chimera Prohibition Act.

Although Greely didn’t think the prospect of a “neuroscience war” akin to the creationism/evolution debate was likely, he gave some pragmatic advice that seemed to strike an uncomfortable chord with one audience member: neuroscience researchers should not go out of their way to offend, and ought to be careful about their claims. While exercising caution and limiting the sensationalizing of claims is certainly valuable, this particular audience member interpreted the latter half of the statement to mean that scientists should not venture into areas of “forbidden knowledge” with their work. I did not catch all of Greely’s remark, but I believe his statement may have been misinterpreted. If any blog readers attended this session and caught Greely’s response, clarification would be appreciated.