The neuro-meme


It seems as if the neuro-meme has arrived.

Over at Nature Reviews Neuroscience, there is an article entitled Neuroculture by Giovanni Frazzetto, Research Fellow at the BIOS Centre of the London School of Economics, and Suzanne Anker, Chair of the Fine Arts Department at the School of Visual Arts in New York.

“Neuroscience addresses questions that, if resolved, will reveal aspects of our individuality. Therefore neuroscientific knowledge is not solely constrained within laboratories, but readily captures the attention of the public at large. Ideas, concepts and images in neuroscience widely circulate in culture and are portrayed in literature, film, works of art, the mass media and commercial products, therefore shaping social values and consumer practices. The interaction between art and science offers an opportunity to make the scientific community and the public aware of the social and ethical implications of the scientific advances in neuroscience.”

In the same week, over at the literary magazine N+1, editor Marco Roth has a great article entitled The Rise of the Neuronovel. Here is the opening gambit:

Continue reading

Why fMRI is unsatisfying – a neuronal perspective

Each year when the Society for Neuroscience Meeting rolls around, all of the major journals devote extra space to neuroscience, publishing hot articles to attract the attention of the 30,000-plus attendees at the conference.  This year is no exception, and one of the most important articles came out this past week in Nature with the heady title “Intracellular dynamics of hippocampal place cells during virtual navigation”.  The paper, by Chris Harvey, Forrest Collman, Daniel Dombeck & Dave Tank, is a tour de force investigation that combines new technology with insightful experimental manipulations and shows, according to an accompanying commentary by Doug Nitz, that “it is not impossible to examine brain correlates of higher cognitive processes and at the same time identify their underlying causes at the cellular level”.

The detailed results are probably too technically specific for most people in the field of neuroethics, but this study highlights some of the reasons that hard-core neuroscientists view fMRI with disdain.  Given the prominence that imaging the human brain has come to play in neuroethical discourse, I encourage readers to take a few moments to at least try to appreciate what the issues might be.

First, let’s take a look at what Dave Tank’s group at Princeton has done.  For over 35 years, neuroscientists have known that the firing rate of a subset of hippocampal pyramidal cells (the so-called place cells) changes in a predictable fashion as an animal navigates through a spatial environment.  In particular, the firing rate of a place cell reflects both the animal’s present spatial position and the path the animal has taken to reach that position.  Think about that for a second: the output of a single neuron reflects a highly nuanced and information-rich algorithm.  But it does not stop there.  When multiple place cells are recorded at the same time, they exhibit a phenomenon called phase precession.  Nitz’s commentary sums it up nicely:

The firing order for a set of hippocampal place cells with partially overlapping place fields is found to match the animal’s physical trajectory corresponding to those fields. Phase precession stands as perhaps the most robust example of temporal coding of information in the mammalian brain.
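Phase precession can be hard to picture from prose alone. Here is a minimal toy sketch in Python – my own illustration, not the model used in the paper; the field width, track positions, and the linear phase-position relationship are all simplifying assumptions – showing how cells with overlapping place fields end up firing, within a single theta cycle, in the order in which the animal entered their fields:

```python
# Toy model of phase precession (assumption: a cell's spike phase falls
# linearly from 360 deg at field entry to 0 deg at field exit).

def spike_phase(position, field_center, field_width=20.0):
    """Theta phase (degrees) at which a place cell fires, or None if the
    animal is outside the cell's place field."""
    offset = position - (field_center - field_width / 2)
    if not 0 <= offset <= field_width:
        return None  # cell is silent outside its field
    return 360.0 * (1 - offset / field_width)

# Three cells with partially overlapping place fields on a linear track (cm).
field_centers = {"cell_A": 40.0, "cell_B": 50.0, "cell_C": 60.0}

# The animal sits at 50 cm, inside all three fields at once.
position = 50.0
phases = {name: spike_phase(position, c) for name, c in field_centers.items()}

# Within one theta cycle, a cell firing at a smaller phase fires earlier,
# so sorting by phase recovers the order in which the fields were entered.
firing_order = sorted(phases, key=phases.get)
print(firing_order)  # → ['cell_A', 'cell_B', 'cell_C']
```

The real phenomenon unfolds across many theta cycles, of course; the point of the sketch is only that spike phase carries ordering information about the trajectory beyond what firing rate alone conveys.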

Continue reading

Neuroscience and the Ethics of Coercive Interrogation

The issue of using tactics of ‘coercive interrogation’ – and by extension torture – to extract information from individuals is a long-standing, unsettled, and complicated debate. The ethics of torture in particular has come under greater scrutiny in the wake of 9/11, the “war on terror”, and, more recently, the public attention drawn to the abuses at Abu Ghraib. Torture is currently prohibited under international law, yet many countries around the world are believed to continue using torture as a means of obtaining sensitive information, particularly information relating to acts of terrorism.

The ethics of ‘coercive interrogation’ or ‘torture’ is often debated in deontological versus utilitarian (consequentialist) terms. (Note: this post is not concerned with whether there is a distinction, even a moral one, between ‘coercive interrogation’ and ‘torture’; it assumes that they are the same. For a broader discussion, see Michael Ignatieff, leader of Canada’s Official Opposition, and his ‘lesser evil’ argument.) For example, the claim that torture violates human rights has roots in deontology, while others justify torture by claiming that coercive interrogation techniques have saved many lives, which stems from utilitarian thinking.

In a recent early-access article* in Trends in Cognitive Sciences, Shane O’Mara of Trinity College Dublin raises an objection to torture, but this time the objection rests on scientific grounds. O’Mara reports that coercive interrogation techniques rest on a large assumption: that the extreme stress, anxiety, and shock of torture and interrogation tactics have no impact on the brain and its memory systems. O’Mara shows that this incorrect assumption was in fact made by the previous Bush Administration; the philosophy behind the interrogations, he states, was “based on the idea that repeatedly inducing shock, stress, anxiety, disorientation and lack of control is more effective than standard interrogatory techniques in making suspects reveal information.”  Retrieving accurate information from a subject under extreme stress and anxiety is thus unlikely, because under these conditions the brain changes significantly, in ways that may inhibit the processes supporting long-term memory retrieval.

Memory is a complex system of cognition whereby malleable neural systems actively process and retain information and reconstruct past experiences. As O’Mara outlines in his review, the idea that stress hormones such as cortisol and adrenaline inhibit memory retrieval is not new. He thus argues that harsh torture may motivate prisoners to ‘talk’, but its impact on memory retrieval may prevent content from being accurately recalled from long-term memory (and, as he rightly notes, they talk so they won’t be water-boarded). This knowledge will no doubt further support the human-rights argument against torture, as even more long-term harm to brain tissue and executive function may be inflicted on the prisoner as a result. However, the larger question remains: will this knowledge change our practices in emergencies or times of war? Or, for those interested in the implications of neuroethics, will our greater understanding of the brain actually separate the is from the ought?

This discussion brings to mind the “ticking time bomb” example, often raised in debates about the value of torture. The example goes something like this:

You are made aware of a bomb that is about to explode in the downtown of some large metropolis and will kill many people. The individual who knows how to defuse the bomb is in your custody and refuses to tell you the whereabouts of the bomb or how to defuse it. The only way to extract the information from him or her is torture. Should the person be tortured?

Though some groups such as Amnesty International have attempted to “defuse” the argument, the emotional nature of the time-bomb scenario (e.g., “would you permit torture so that thousands of innocent people will be saved?”) lets us delve briefly into the moral psychology of the dilemma. O’Mara does, in fact, refer to this time-bomb problem in his paper. Emphasizing the neuroscience, he suggests “that torture is as likely to elicit false as well as true information, and that separating the one from the other will be very difficult.” True, this may be the case for individuals who are ‘coercively interrogated’ over long periods of time. But if we return to our time-bomb example, it is unlikely that over an extremely short and limited period of time (the bomb is ticking), the suspect will have difficulty recalling – assuming no major brain injury or trauma was incurred during the interrogation – information as deeply encoded as where the bomb may be (episodic memory) and how to defuse it (procedural memory).

And so, regrettably, I am unconvinced – or rather, not so optimistic – about the larger question. Perhaps because of North America’s fear-based culture, torture may still be deemed permissible by some in emergency or wartime situations, even in “ticking time-bomb” scenarios. I can only hope otherwise, but I am skeptical that the neuroscience described by O’Mara will have any impact on the ethics of torture.

*I couldn’t access the article from the Journal’s website, but found a non-proofed copy here.

What is normal, anyway?


Hans Asperger (1906–1980) at work in the University Pediatric Clinic, Vienna, performing a psychological test on a child.

Over at the Guardian, there is a delightful piece about an adolescent boy with a form of autism spectrum disorder known as Asperger’s syndrome. The entire article is worth reading for its insight into the life of an individual with Asperger’s, but one of the most telling lines emerges when the author tells us,

I begin to see what his mother means when she says Asperger’s can be more complex than the stereotypes suggest. “If there was a cure for Asperger’s,” she says, “I wouldn’t want it. Al’s just himself.”

Alex echoes his mother’s comments.

“I don’t think I’ve got a disability. I like being me.”

When patients say that they prefer the situation that they find themselves in, it is worth stopping and asking if the medicalization machine is moving too far too fast.

Hans Asperger first described the phenomenon in 1944, but the diagnosis of Asperger’s did not become official until 1992 when it was included in the International Classification of Diseases (ICD-10); in 1994, it was included in the Diagnostic and Statistical Manual of Mental Disorders, DSM-IV.  It is worth quoting from the ICD-10.

A disorder of uncertain nosological validity, characterized by the same type of qualitative abnormalities of reciprocal social interaction that typify autism, together with a restricted, stereotyped, repetitive repertoire of interests and activities. It differs from autism primarily in the fact that there is no general delay or retardation in language or in cognitive development. This disorder is often associated with marked clumsiness. There is a strong tendency for the abnormalities to persist into adolescence and adult life. Psychotic episodes occasionally occur in early adult life.

As it turns out, it seems likely that despite the ICD’s disclaimer of “uncertain nosological validity” for the diagnosis, there are indeed individuals out there with Asperger’s, and it is important to recognize them.  The important question is whether it should be considered a disease or not.  This question is raging as the field gears up for the arrival of DSM-V (I previously wrote about this here). In a broadside at the process being used to develop the new version of the DSM, Allen Frances, who chaired the DSM-IV Task Force, argues that the approach being taken is way off track.  The earnest group engaged in the herculean task of revising the DSM pushes back. So it goes in academic medicine. [For a definitive historical look at the DSM, I highly recommend Christopher Lane's book Shyness.]

What concerns us is not squabbling over process or priority, but rather the impact that all of this has on individuals and society at large.  Among a number of concerns, Dr. Frances rightly reserves his strongest objection for the potential of DSM-V to further medicalize normalcy.

Undoubtedly, the most reckless suggestion for DSM-V is that it include many new categories to capture the subthreshold (eg, minor depression, mild cognitive disorder) or premorbid (eg, prepsychotic) versions of the existing official disorders. The beneficial intended purpose is to improve early case finding and promote preventive treatments. Unfortunately, however, the DSM-V Task Force has failed to adequately consider the potentially disastrous unintended consequence that DSM-V may flood the world with tens of millions of newly labeled false-positive “patients.” The reported rates of DSM-V mental disorders would skyrocket, especially because there are many more people at the boundary than those who present with the more severe and clearly “clinical” disorders. The result would be a wholesale imperial medicalization of normality that will trivialize mental disorder and lead to a deluge of unneeded medication treatments—a bonanza for the pharmaceutical industry but at a huge cost to the new false-positive patients caught in the excessively wide DSM-V net. They will pay a high price in adverse effects, dollars, and stigma, not to mention the unpredictable impact on insurability, disability, and forensics.

Dr. Frances goes on to say,

The incredible recent advances in neuroscience, molecular biology, and brain imaging that have taught us so much about normal brain functioning are still not relevant to the clinical practicalities of everyday psychiatric diagnosis. The clearest evidence supporting this disappointing fact is that not even 1 biological test is ready for inclusion in the criteria sets for DSM-V.

So long as psychiatric diagnosis is stuck at its current descriptive level, there is little to be gained and much to be lost in frequently and arbitrarily changing the system. Descriptive diagnosis should remain fairly stable until, disorder by disorder, we gradually attain a more fundamental and explanatory understanding of causality.

Here we get to the heart of the dilemma.  The field of psychiatry struggles to help patients, but the truth is that the neurosciences have yet to reveal the causes of psychiatric disorders.  Without objective criteria to guide them, physicians fall back on descriptors which are imprecise and therapies which do not treat the underlying (and still unknown) pathology. Lacking the confidence to distinguish normal behavior from diseased, the field inadvertently medicalizes normalcy.

And Alex, the boy with Asperger’s, finds himself squarely in the crosshairs of this raging dispute.

Banff: Neuroscience, Ethics, and Public Communication

Communication is the focus of our meeting this weekend at the Banff Centre. Neuroscientists, ethicists, and journalists will collaborate to explore unique modes of engaging the public in the ethics of brain science. The participants are renowned experts in their field and bring with them a world of purpose and promise.

Continue reading