Communication with vegetative state patients: Dialogue or soliloquy?

By Ania Mizgalewicz and Grace Lee

The world first heard from Canadian Scott Routley this past week. Routley, who has been in a diagnosed vegetative state for the last 12 years, seemed to communicate to scientists, via signals measured from blood flow in his brain, that he was not in pain. The finding caught the headline attention of major news sites and spurred vast public commentary. Comments ranged from fearful to hopeful about mind reading, clinical applications of the technology, and its potential to allow patients to communicate their desire to live.

Leading neuroscientist Adrian Owen, based in London, Ontario, explains that the technology currently allows patients to respond to yes-or-no questions, but may one day be used to aid more interactive communication. Questions would center on daily living preferences, with the aim of improving quality of life and health care.

The findings by Owen and his group are truly exciting and provide great hope to the historically neglected population of people with serious brain injuries. Here at the National Core for Neuroethics, we encourage more discussion about the ethical implications of this technology. Questions such as those surrounding decisions about end of life are far in the future. The focus at present should thus remain on validating this technology for eventual use in the clinical setting. If clinical use proves feasible, we will need to address questions about access to the technology and the impact that its availability would have on families of patients.

Great caution and restraint are needed when coupling this still-emerging technology with concerns about mind reading or clinical decision-making about end of life. Hype here unfairly detracts from the true value of this work. With one in five vegetative patients showing signs of consciousness in these studies, the focus should remain on improving their daily surroundings, providing them a means of communication, and supporting their family members. The work should also spark a conversation on the effectiveness and validity of the clinical tests currently used to diagnose these patients at the bedside.

Top image: wellcome images / flickr
Bottom image: Noel A. Tanner / flickr

Sahakian on ‘Smart drugs’ at the Royal Institution

Photo: Murdo Macleod in the Guardian

Barbara Sahakian gave a talk at  the Royal Institution the other day on ‘Smart Drugs’.  You can listen to the talk here.   The talk received quite a bit of media attention, most notably in an article in the Guardian entitled, “A Pandora’s box full of smart drugs“.

Personally, I think the evidence that methylphenidate and modafinil are bona fide cognitive enhancers is not as strong as many suggest, but there is little question that the pharmaceutical industry is gearing up to produce drugs that will satisfy this market (I hesitate to say need) in the years to come. I was reassured that Barbara pointed out to the audience that exercise, both physical and mental, can provide effects comparable to what these drugs can offer. Whether the audience heard that or not remains to be seen.

Someone who attended the lecture reported that, “She is disquietingly relaxed about it all; I wasn’t certain that she realises the power of what she is helping to unleash.”   This reminded me of the comment that David Healy made at the Brain Matters conference in Halifax in September 2009, where he opined that one of the problematic consequences of neuroethicists talking about cognitive enhancement is that it educates the populace that these compounds exist, and thereby might encourage their use.  Indeed, in the comments section of the Guardian article, someone calling themselves the ‘Rabid Racoon’ wrote,

I had no idea these drugs existed, thanks for informing me so I can go buy some.

p.s. slightly disapointed that the ‘ads by google’ which are putatibvely (sic) based on the content of the page you are looking at aren’t for online pharmacies

You are not your brain scan!

Natasha Mitchell, host of the ever interesting All in the Mind series from ABC Radio, gave a talk this past July at the Adelaide Festival of Ideas entitled, “You are not your brain scan!“.  From the liner notes on the Slow TV website:

“The study of the brain has attracted extraordinary public interest in recent years, partly driven by major scientific breakthroughs in understanding the brain’s workings. To rely on brain scans, however, risks simplifying the science and equating our brain scans with destiny, much like the early years of genetics and reporting on genetics.  In this very entertaining and insightful talk at the Adelaide Festival of Ideas, Natasha Mitchell of ABC Radio (All in the Mind) introduces a healthy note of sense and caution to the discourse about what we can learn from studying brain scans.”

Unless you have been living in a cave somewhere (and maybe even if you have), you will have noticed that images of brain scans have suffused popular culture of late.  Natasha takes us through the pitfalls of believing that the brain scans tell us what we so desperately want to know, and along the way gives a pretty good overview of  The Neuro Meme, as well as the ways that not only the public but scientists have become seduced by the power of the image of the living brain.   [One of my favorite lines: "It's become a game of pin the thought on the neuron." ]   Natasha’s main point is that scientists might be making claims beyond what is technically or conceptually reasonable, and I, for one, stand up and enthusiastically applaud her for taking the imagers to task over the veracity of their claims. One need not even invoke the infamous dead fish fMRI to know that there has been a bit of hyperbole out there.

[Postscript:  I would have liked Natasha's presentation on its merits alone, but the fact that she pokes fun at the growth of neuro-everything, but then applauds neuroethics as one new subfield that is on the money, biased me even more.  I wonder if that is one of those brain things...]

On communicating neuroethics

Communicating science to the public, as we try to do on this blog, is difficult. The challenges are many, but none so great as finding the right level at which to pitch the story. We try not to get too mired in details lest we lose our readers, but those of us who come from a background steeped in scientific investigation are often tempted to do so simply because of the seductive elegance of the science.

Over at Atlantic Correspondents, David Shenk speaks directly to this point with a piece entitled “On the Art of Nonfiction” – essentially the text of a speech that he recently gave at the “Great Nonfiction Writers Lecture Series” at Brown University.

Day 2: Social Issues Roundtable

The Social Issues Roundtable got underway just after 1pm today, to a capacity crowd. Actually, the room was beyond capacity and people were spilling out to the periphery of the room. At the conclusion of the symposium, the line-up for Q&A was deep, and questions were mainly directed towards issues of science communication (Note: apparently the official title of the Social Issues Roundtable was “Engaging the Public on Ethical, Legal, and Social Implications of Neuroscience Research” but somehow I didn’t realize that). It is encouraging to see such interest. I briefly summarize presentations by Patricia Churchland, Barbara Sahakian, Jonathan Moreno, and Hank Greely below. The “roundtable” was moderated by Alan Leshner. Unfortunately, the presenters were restricted to about 10 or so minutes each, so nobody could really dig deep into any of the issues.

Patricia Churchland – Muddle’s Fallacy: Responsibility Requires Cause-Free Choice

Although Churchland did not describe what Muddle’s Fallacy is (if it is anything at all), she focused her efforts on arguing that a ‘determined’ brain does not eliminate moral and legal responsibility. The problem this raises is how both the law and society can hold someone responsible for their actions if those actions are the end result of a series of pre-determined behaviours. Churchland also stated that society would likely not accept the premise that someone could not be held responsible, at least to some degree, for their actions. This argument is not new; it has been articulated since antiquity by the likes of Aristotle, later by David Hume, and much closer to our time in elegant papers by Greene & Cohen and Adina Roskies.

Barbara Sahakian – Neuroethics and Society: Pharmacological Cognitive Enhancement

Barbara Sahakian briefly spoke to two issues during her short presentation: public engagement and cognitive enhancement. To engage the public in neuroethics, Sahakian stated, neuroscientists from the undergraduate to graduate levels “need neuroethics teaching.” This, she claimed, will train future scientists to better communicate their research to the public. Although she didn’t say much more beyond that, Sahakian framed her argument as a matter of duty: scientists have an obligation to the public, particularly because most scientists are funded with public money. Second – and somewhat intermingled with the first, although the connection was not entirely clear – Sahakian made a short case for the responsible use of cognitive-enhancing drugs by healthy individuals. For a longer discussion of this argument, see the commentary she co-authored with co-roundtabler Greely and others in Nature.

Jonathan D. Moreno – Neuroethics and National Security

Dr. Moreno probably gave the most entertaining talk of all the presenters. Outlining some of the issues in his book Mind Wars, Moreno discussed the history of the brain sciences in matters of (American) national security. He spoke of some of the major actors in this history, such as Sidney Gottlieb and Henry K. Beecher, and of Beecher’s involvement with the CIA and drug experiments of the early 1950s. It was unfortunate that Moreno had limited time. His description of modern uses of neurotechnology by defense services (e.g., oxytocin and torture) was particularly intriguing, and he stated that neuroscientists are not so far removed from the equation, as their work is consistently being used to inform major policy documents by the National Academies, such as Emerging Cognitive Neuroscience and Related Technologies.

Hank Greely – Possible Societal Reactions to – and Rejections of – Neuroscience’s Understanding of the Mind

The main theme underlying Greely’s talk was whether or not advances in neuroscience would instigate a conflict similar to the creationist/evolution wars. To illustrate his argument he drew upon three points:

1. What is it in neuroscience that makes people nervous;

2. What are the probabilities of a “neuroscience war”; and

3. Pragmatic advice to limit the possibility of a bad outcome.

Greely’s first point, similar to Churchland’s, had much to do with moral intuitions. For instance, he discussed (the fact…?) that neuroscience does not see evidence of a soul (he made a remark – jokingly, I presume – about a ‘soul spot’ in the brain), and, echoing the arguments made earlier by Pat Churchland, that neuroscience’s threat to free will is incredibly unsettling to most people (see some really fascinating work on folk intuitions on free will, responsibility, and determinism). Prof Greely also alluded to the uniqueness (or perhaps not, as he was careful to say) of human consciousness and how it separates us from other animals (although many primates do indeed have brains similar to those of human beings). In discussing this, Greely referred to some recent controversies in human chimera research (e.g., the human neuron mouse) and to responses to the science fueled by religious-political ideology (i.e., that man was created in god’s image), including efforts by US Senator Sam Brownback and his Human Chimera Prohibition Act. Although he didn’t think the prospect of a “neuroscience war” akin to the creationism/evolution debate was likely, he gave some pragmatic advice which, it seemed, struck an uncomfortable chord with one audience member. Greely stated that neuroscience researchers should not go out of their way to offend, and ought to be careful about their claims. While exercising caution and limiting the sensationalizing of claims is certainly of value, this particular audience member interpreted the latter half of the statement to mean that scientists should not venture into areas of “forbidden knowledge” with their work. I did not catch all of Greely’s reply, but my belief is that his statement was perhaps misinterpreted. If any blog readers attended this session and caught Greely’s response, clarification would be appreciated.

Neuroimaging In Focus: The Hype

The Dana Foundation’s Cerebrum recently featured an article by Russell A. Poldrack, which addressed some common misconceptions about neuroimaging research. The piece starts off with this quote:

Colorful brain images may tempt researchers to make claims that outpace solid scientific data—and may tempt the public to believe those claims. In particular, although brain imaging has provided solid evidence of alterations in brain structures and functions associated with many psychiatric disorders, it can be used neither to diagnose such disorders nor to determine exactly how treatments work—at least not yet. Keeping some key ideas in mind can help us evaluate the next report of a brain-imaging “breakthrough.”

The issues and problems with the interpretation of brain images are no stranger to neuroethics discourse. For example, Illes et al. have discussed the ethical, legal, and social issues (ELSI) of advanced neuroimaging, and other papers explore the epistemological and ethical issues that come with the current limitations of imaging technology (see, e.g., Racine, Bar-Illan & Illes, 2005; Illes, 2007; Illes, Racine, & Kirschen, 2006). Others, such as McCabe and Castel and Weisberg et al., have taken an empirical approach to understanding the “temptation” of brain images by testing human participants directly. These latter studies demonstrated that presenting research alongside brain images or neuroscience information resulted in higher ratings of scientific reasoning, or in explanations being judged more satisfying, even when the added information was “logically irrelevant” (as was the case in the Weisberg et al. study).

The Dana article on “pipe dream” neuroimaging comes right on the heels of a couple of other reports on crucial methodological and statistical flaws in the data-analysis process of brain imaging. The first, which received considerable online debate in blogs and elsewhere, was the paper “Voodoo Correlations in Social Neuroscience.” The lead author, Ed Vul, argues that many studies used non-independent analytic techniques that inflated the correlations they reported. A second paper, currently in press at Nature Neuroscience, argues against “double-dipping” in systems neuroscience research. Double-dipping refers to “the use of the same dataset for selection and selective analysis,” which “will give distorted descriptive statistics and invalid statistical inference whenever the results statistics are not inherently independent of the selection criteria under the null hypothesis.” This, no doubt, adds fuel to the fire for the extreme sceptics of neuroscience, medicine, and science in general. Who will guard the guardians of neuroscience?
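To make the “double-dipping” problem concrete, here is a minimal, purely illustrative sketch of my own (not drawn from either paper), written in Python with NumPy. It generates pure-noise “voxel” data, selects the voxels most correlated with a behavioural score, and then compares the correlation re-estimated on the same data with the correlation estimated on an independent half of the subjects.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_voxels = 20, 5000
# Pure noise: simulated voxel "activations" with no true relationship to behaviour
activations = rng.standard_normal((n_subjects, n_voxels))
behaviour = rng.standard_normal(n_subjects)

def corr_with_behaviour(data, scores):
    # Pearson correlation of each voxel (column) with the behavioural score
    d = data - data.mean(axis=0)
    s = scores - scores.mean()
    return (d * s[:, None]).sum(axis=0) / (
        np.sqrt((d ** 2).sum(axis=0)) * np.sqrt((s ** 2).sum())
    )

# Double-dipping: select voxels on the full dataset, then report the
# correlation of those same voxels computed on the same data.
r_all = corr_with_behaviour(activations, behaviour)
selected = np.argsort(r_all)[-20:]                    # "most correlated" voxels
print("double-dipped r:", r_all[selected].mean())     # spuriously large

# Independent split: select voxels on one half, estimate on the other half.
half = n_subjects // 2
r_train = corr_with_behaviour(activations[:half], behaviour[:half])
selected = np.argsort(r_train)[-20:]
r_test = corr_with_behaviour(activations[half:], behaviour[half:])
print("held-out r:", r_test[selected].mean())         # hovers near zero
```

On noise data the double-dipped estimate comes out large while the held-out estimate hovers near zero, which is exactly the kind of distorted descriptive statistic the Nature Neuroscience paper warns about.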

What the Cerebrum article got me thinking of in particular, aside from what I briefly refer to above and the – loosely – hermeneutical process of interpreting brain images, is how we as human beings relate to the facts we receive from science and medicine, and our relationship with technology; in particular, technology that provides facts associated with the self. Joseph Dumit has explored this concept, and the persuasive power of brain images, in his book Picturing Personhood. Using a neuro-anthropological perspective (pardon the neologism), Dumit examines the power that brain images, as represented in the mass media, have in altering the understandings people have of their own bodies and brains – a concept he calls the “objective self”. I have become particularly fascinated by the notion of objective self-fashioning, specifically as it relates to the research field of imaging genetics (or imaging genomics). In an upcoming paper I’m giving at the 20th Annual Canadian Bioethics Society Conference in Hamilton, ON, I argue that the power (and persuasiveness), increased sensitivity, and lower statistical variability of the combined technologies (genetics + imaging) require a heightened clinical sensitivity to the objective-self-fashioning process in the translation of knowledge derived from imaging genetics. This is not an argument for neuro- or genetic-exceptionalism. I welcome feedback on this claim as I develop my paper.

I will close with a quote from Hall and colleagues on addiction, the notion of disease, neuroimaging, and the “seductive allure” of neuroscience explanations:

A ‘disease’ that can be ‘seen’ in the many-hued  splendour of a PET scan carries more conviction than one justified by the possibly exculpatory self-reports of addicts who claim that they are unable to control their drug use (p.867).