Being-In-The-World-With-Grant-Gillett

The National Core for Neuroethics was recently fortunate to have Grant Gillett, Professor of Medical Ethics at the University of Otago, visit and spend time with the group as Cecil H. and Ida Green Visiting Professor.

Gillett’s interests are both extensive and diverse, and he draws heavily upon his training in neurosurgery, post-structuralism, and analytic philosophy in his dialogue and writing. Although the thinking of individuals such as Michel Foucault, Aristotle, and Friedrich Nietzsche is embedded in many of Gillett’s texts, he has a particular affinity for the likes of Immanuel Kant and Ludwig Wittgenstein. From time to time one will catch a glimpse of phenomenologists such as Martin Heidegger and Maurice Merleau-Ponty when reading Gillett, in addition to contemporary theorists such as Daniel Dennett.

Professor Gillett gave a series of four lectures in his capacity as Green College Visiting Professor, and he also spent a significant amount of time with other groups in both the University community and the Vancouver area, sharing his wisdom, insight, and love of knowledge.

The four lectures given by Gillett are listed below, each with its accompanying abstract. At the end, I will briefly comment on the threads that tie his philosophy together, rather than commenting on each lecture individually.

The Cultural Brain: A Neural Palimpsest

To what extent is the brain a biological system best understood in terms of natural science and to what extent is the brain a cultural product? If the brain is a hybrid, how should neuroethics approach the contentious issues raised by contemporary neuroscience such as free will and the nature of consciousness?

The Warrior Gene: A Case for Neuroethical Diagnosis

What is the warrior gene and why is it over-represented in some racial groups? Is the warrior gene the reason why certain groups are disproportionately highly ranked in the statistics of societal discontent, or should we look further?

Neuroethics and Hysteria: The Mind and Neurological Disorder

It is paradoxical that whereas we normally assume that the mind is an elaboration based on underlying brain processes, hysteria forces us to explore a phenomenon where the mind causes a presentation that looks neurological. What is going on in hysteria, and how is it that a fairly common tendency produces a disorder in which the person concerned does not seem to know what is going on in his or her own mind/brain?

Neuroethics and Human Identity

Neuroethics encounters significant questions of human identity when we examine the moral rights of embryos, people in persistent vegetative states or with other forms of brain damage, and cyborgs. What kind of society are we in danger of producing if we allow a functional conception of neuroethics to prevail in our self-understanding?

The philosophical threads that weave through Gillett’s thinking (and which draw from the spool of theorists listed above) direct attention to the subjective brains of human beings and to each individual’s psyche or soul. (Gillett, in discussion, goes to considerable lengths to explain that the word ‘psyche’ is derived from the Aristotelian psuche, meaning ‘soul’, as opposed to the Judeo-Christian concept of ‘soul’. A neo-Aristotelian view thus suggests that neurocognitive skills lay the foundation of an individual’s soul.) For Gillett, the subjective brain reflects the life of dasein and mitsein (which Gillett compounds into ‘being-in-the-world-with-others’), and identity reflects how that human being engages with the complex human life-world (following Husserl). And so the human brain is involved in a cybernetic relationship with the world, which makes the human being a relational creature – that is, one’s neural network is inscribed by biology, culture, and social and historical context, and so one becomes an embodied subject. What is more, from a Gillettian/neo-Aristotelian perspective, human subjectivity is enmeshed in the person’s neurological processes and functions and in his or her place in the life-world. Human actions are thus constrained – not determined – by these contingencies, which makes each human subject, in effect, unique.

From all of us here at the Core, we thank Professor Gillett for being ever so generous with his time and for truly enriching and enhancing both our brains and lived experiences.

Day 2: Social Issues Roundtable

The Social Issues Roundtable got underway just after 1pm today, to a capacity crowd. Actually, the room was beyond capacity and people were spilling out to the periphery of the room. At the conclusion of the symposium, the line-up for Q&A was deep, and questions were mainly directed towards issues of science communication (Note: apparently the official title of the Social Issues Roundtable was “Engaging the Public on Ethical, Legal, and Social Implications of Neuroscience Research” but somehow I didn’t realize that). It is encouraging to see such interest. I briefly summarize presentations by Patricia Churchland, Barbara Sahakian, Jonathan Moreno, and Hank Greely below. The “roundtable” was moderated by Alan Leshner. Unfortunately the presenters were restricted to about 10 or so minutes each, so nobody could really dig deep into any of the issues.

Patricia Churchland – Muddle’s Fallacy: Responsibility Requires Cause-Free Choice

Although Churchland did not describe what Muddle’s Fallacy is (if it is anything at all), Professor Churchland focused her efforts on arguing that a ‘determined’ brain does not eliminate moral and legal responsibility. The problem this raises is how both the law and society can hold someone responsible for their actions if those actions are the end result of a series of pre-determined behaviours. Churchland also stated that society would likely not accept the premise that someone could not be held responsible – at least to some degree – for their actions. This argument is not new; it has been articulated since antiquity by the likes of Aristotle and David Hume, and closer to our own time in elegant papers by Greene & Cohen and Adina Roskies.

Barbara Sahakian – Neuroethics and Society: Pharmacological Cognitive Enhancement

Barbara Sahakian briefly spoke to two issues during her short presentation: public engagement and cognitive enhancement. To engage the public in neuroethics, Sahakian stated, neuroscientists from the undergraduate to graduate levels “need neuroethics teaching.” This, she claimed, will train future scientists to better communicate their research to the public. Although she didn’t say much more beyond that, Sahakian framed her argument as a matter of duty: scientists have an obligation to the public, particularly because most scientists are funded with public money. Second – and somewhat intermingled with the first, although the connection was not entirely clear – Sahakian made a short case for the responsible use of cognitive enhancing drugs by healthy individuals. For a longer discussion of this argument, see the commentary she co-authored with co-roundtabler Greely and others in Nature.

Jonathan D. Moreno – Neuroethics and National Security

Dr. Moreno probably gave the most entertaining talk of all the presenters. Outlining some of the issues in his book Mind Wars, Moreno discussed the history of the brain sciences in matters of (American) national security. Moreno spoke of some major actors in this history, such as Sidney Gottlieb and Henry K. Beecher, and of Beecher’s involvement with the CIA and the drug experiments of the early 1950s. It was unfortunate that Moreno had limited time. His description of modern uses of neurotechnology by defense services (e.g. oxytocin and torture) was particularly intriguing, and he noted that neuroscientists are not so far removed from the equation, as their work is consistently used to inform major policy documents by the National Academies, such as Emerging Cognitive Neuroscience and Related Technologies.

Hank Greely – Possible Societal Reactions to – and Rejections of – Neuroscience’s Understanding of the Mind

The main theme underlying Greely’s talk was whether or not advances in neuroscience would instigate a conflict similar to the creationism/evolution wars. To illustrate his argument he drew upon three points:

1. What is it about neuroscience that makes people nervous?

2. What are the probabilities of a “neuroscience war”?

3. Pragmatic advice to limit the possibility of a bad outcome.

Greely’s first point, similar to Churchland’s, had much to do with moral intuitions. For instance, he discussed the claim (fact?) that neuroscience does not see evidence of a soul (he made a remark – jokingly, I presume – about a ‘soul spot’ in the brain) and, again echoing the arguments made earlier by Pat Churchland, noted that neuroscience’s threat to free will is incredibly unsettling to most people (see some really fascinating work on folk intuitions about free will, responsibility, and determinism). Prof Greely also alluded to the uniqueness (or perhaps not, as he was careful to say) of human consciousness and how it separates us from other animals (although many primates do indeed have brains similar to those of human beings). In discussing this, Greely referred to some recent controversies in human chimera research (e.g., the human neuron mouse) and responses to the science fueled by religious-political ideology (i.e., that man was created in God’s image), including efforts by US Senator Sam Brownback and his Human Chimera Prohibition Act. Although he didn’t think the prospect of a “neuroscience war” akin to the creationism/evolution wars was likely, he gave some pragmatic advice which, it seemed, struck an uncomfortable chord with one audience member. In his pragmatic advice, Greely stated that neuroscience researchers should not go out of their way to offend, and ought to be careful about their claims. While exercising caution and limiting the sensationalizing of claims is valuable, this particular audience member interpreted the latter half of the statement to mean that scientists should not venture into areas of “forbidden knowledge” with their work. I did not catch all of Greely’s remark, but I believe his statement may have been misinterpreted. If any blog readers attended this session and caught Greely’s response, clarification would be appreciated.

Brain Matters – A Conference Report

Last week, Daniel Buchman and I travelled to Halifax to attend the Brain Matters Conference.  About 125 people attended – philosophers, sociologists, anthropologists, neuroscientists, and more.  The meeting was quite successful, particularly as it afforded lots of time for hallway conversations.  In the paragraphs below, Daniel and I summarize our impressions of the plenary lectures.

David Healy from Cardiff University gave the opening plenary.  Many will know that Healy famously had his job offer withdrawn by the University of Toronto, with much speculation revolving around the guiding hand that pharmaceutical companies may have had in the background (for more about the matter, see here).  At the time, Healy was one of the early voices raising concerns about both the efficacy and safety of SSRIs, but in his plenary he focused on a more current issue: conflict of interest in the pharmaceutical industry.  While there is an abundance of evidence on the topic these days, Healy’s presentation was unfortunately laced with more innuendo than facts, as was evident in the ensuing discussion when several people challenged him on his evidence and he was forced to back down.

The Friday morning plenary session belonged to Caroline Tait, an Assistant Professor in the Department of Native Studies at the University of Saskatchewan. Tait, a medical anthropologist who identifies as Métis, spoke on how many First Nations and Métis people experience, interpret, and respond to illness, and how this understanding has implications for the formation of Indigenous identity and medical morality.  Tait argued for combining ethical regimes so that local Indigenous worldviews are placed on an equal footing with Western ethical principles. Tait described the case of Jordan River Anderson, a child from Norway House Cree Nation who died from a rare neuromuscular disorder called Carey Fineman Ziter syndrome while the provincial and federal governments fought incessantly over who was financially responsible for his home care. Professor Tait emphasized the reality that many Indigenous peoples, and children in particular, fall through the cracks of both the provincial/territorial and federal systems. The tragic story prompted a Private Member’s motion, known as Jordan’s Principle, that was passed in the Canadian House of Commons in 2007, and provinces have since moved to implement Jordan’s Principle, to some extent, to ensure access to government services for First Nations children with complex health challenges.

On Friday afternoon, Walter Glannon from the University of Calgary spoke to a question that has puzzled philosophers since antiquity: do human beings have free will?  Glannon noted that recent studies in the brain sciences have been taken to suggest that free will is a mere illusion (the so-called ‘threat’ of neuroscience). If this claim were true, Glannon suggested, society’s current practice of holding people accountable, both morally and legally, would be challenged. Glannon assumed a position of compatibilism and argued that free will and responsibility are not ‘threatened’ by recent empirical work in the brain sciences. Glannon referred to many of the usual suspects on both sides of the debate, such as the infamous Benjamin Libet studies and Daniel Wegner’s provocative book The Illusion of Conscious Will. Glannon also discussed other classic positions and thought experiments, including Robert Kane’s “ultimate responsibility” and John Locke’s example of a man who wakes up locked (unknowingly) in a room. Hume’s “liberty of spontaneity,” Fischer’s “guidance control,” and others were offered in support of compatibilism. Despite the strength of the philosophical support, he unfortunately did not offer much to demonstrate the contribution of neuroscience to a compatibilist stance on free will. Although Wegner’s and Libet’s causal neurophilosophical models represented the brain-science contribution, the only other evidence was a brief mention of Patrick Haggard’s work. Perhaps the most interesting part of the talk was the question period, in which strong challenges were raised by members of the audience; at times Glannon answered his critics, but at other times he was forced to concede that he (or might we suggest, his brain?) could not marshal an appropriate riposte on the spot.

James Bernat from Dartmouth gave what amounted to a Master Class in Brain Death.  A neurologist who has been studying the issue for decades, Bernat took the group through the varying arguments about whether or not brain death accurately represents the biology of human death.  Drawing on systems theory and the concept of emergence, he illustrated how modern technology has made the issue of brain death more challenging: not only can we now keep at least some components of biological processes going for long periods of time in the absence of a functioning brain, but, as transplantation has become more commonplace, the prospect of physician conflict of interest looms large.  A substantial part of his presentation was a response to the arguments against the validity of brain death in the recently published report “Controversies in the Determination of Death” by the U.S. President’s Council on Bioethics. To respond to the President’s Council report, he used the framework offered in a recent paper by Raphael Bonelli and colleagues. Bernat’s thoughtful conclusion was that brain death remains the most accurate concept of human death.

Jonathan H. Marks, Director of the Bioethics & Medical Humanities Program at Penn State, injected a healthy dose of skepticism into claims that neuroscience enhances national security.  Reviewing the various tools that have been trotted out over the years to extract information from individuals, he systematically debunked the value of everything from lie detection to waterboarding. [My personal favorite is the letter that convicted spy Aldrich Ames sent to the Federation of American Scientists.  I particularly like the handwritten version.]  Of course, it is not just the futility of these technologies that purport to reveal the ‘truth’ that Marks decried, but also the ethically dubious grounds on which they were famously deployed under the Bush administration’s “War on Terror”.

The meeting was brought to a close with a rather inspiring lecture by Neil Levy from the University of Melbourne and Oxford University.  Reminding us of Adina Roskies’ description of neuroethics as constituting the ethics of neuroscience and the neuroscience of ethics, Levy pointed out that the field of neuroethics has an unprecedented opportunity to move forward by applying the rigorous tools of experimental philosophy (e.g. the Knobe effect – see also the video below) to the neuroscience of ethics.  The take-home point was not lost on the audience, and Neil’s lecture (complete with a slide of an iconic burning armchair – as per the video below) concluded the meeting on a resoundingly high note.

Music on (or projecting off?) your Mind

The Multimodal Brain Orchestra, led by the SPECS research group, recently gave its inaugural performance.  Oh, and there were no instruments.

“…four performers were fitted with caps littered with electrodes that take a real-time electroencephalograph [EEG] – an image of the brain’s electrical activity.

“There is a first violin, a second violin and so on, except that instead of violins they are brains,” says Dr Mura.

The graphs of those brain waves are projected onto one of two large screens above the orchestra. The performers launch sounds or affect their frequencies and modulations based on two well-characterised effects seen in EEGs: the steady-state visually evoked potential (SSVEP), and the so-called P300 signal.

When expectation is fulfilled, 300 thousandths of a second later, a signal known as the P300  appears in the EEG.

In the Multimodal Brain Orchestra, the P300 signal is registered – with a dot demarcating it on the EEG trace projected to the audience, so that they can see the effect of the performer’s thought – in turn launching a sound or recorded instrument.” (links added).
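For readers curious about how a detected brain signal might gate a sound in software, here is a minimal, purely illustrative sketch in Python (using only numpy). To be clear, this is not the Multimodal Brain Orchestra’s actual pipeline: the sampling rate, detection window, amplitude threshold, and the trigger_sound() placeholder are all assumptions invented for the example, and the EEG epoch is simulated rather than recorded.

```python
import numpy as np

# Toy illustration only: simulate a single post-stimulus EEG epoch, then look
# for a P300-like positive deflection roughly 300 ms after the stimulus.
# All parameters below are hypothetical, not taken from the performance.

FS = 250                  # sampling rate in Hz (assumed)
WINDOW = (0.25, 0.40)     # search window after the stimulus, in seconds
THRESHOLD_UV = 5.0        # amplitude threshold in microvolts (assumed)


def trigger_sound(event_name: str) -> None:
    """Placeholder for whatever would launch a sound or recorded instrument."""
    print(f"launching sound: {event_name}")


def detect_p300(epoch_uv: np.ndarray, fs: int = FS) -> bool:
    """Return True if the post-stimulus window contains a P300-like peak."""
    start, stop = int(WINDOW[0] * fs), int(WINDOW[1] * fs)
    return epoch_uv[start:stop].max() > THRESHOLD_UV


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One second of simulated post-stimulus EEG: background noise plus a
    # positive bump centred near 300 ms to stand in for the P300 component.
    t = np.arange(0, 1.0, 1.0 / FS)
    epoch = rng.normal(0, 1.5, t.size) + 8.0 * np.exp(-((t - 0.3) ** 2) / 0.002)
    if detect_p300(epoch):
        trigger_sound("second violin")
```

In a live setting one would of course work with streaming, artifact-corrected EEG and a trained classifier rather than a fixed threshold, but the basic loop – detect a characteristic signal, then launch a sound – is the idea the BBC piece describes.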

While research and exciting activities of this kind re-ignite important and fascinating dialogue around consciousness, I was particularly intrigued by a quote from Dr. Anna Mura, a biologist who is also the producer of the project:

“What we want to show here is the use of your brain without your body. Embodiment – we should get rid of it sometimes.”

So, I have to ask somewhat rhetorically: when do you ever use your body without your brain? Sure, there is autonomic activity of certain organs, but without brain function they would cease to work. Now, I understand where Dr. Mura is going with this statement, and it made me think of the notion of the extended mind, a topic in neurophilosophy and the philosophy of mind which, according to Neil Levy, has “far reaching” implications for neuroethics. Broadly, the extended mind thesis holds that mental states extend beyond the skulls of the brains that produce them. So, for instance, conveying emotion through an instrument in a musical performance is an instance of the extended mind. According to Levy, the extended mind thesis “alters the focus of neuroethics, away from the question of whether we ought to allow interventions into the mind, and toward the question of which interventions we ought to allow and under what conditions.”

Further, Grant Gillett would likely disagree that “we should get rid of” embodiment – in fact, it is foreseeable that he would consider the idea impossible, and he would probably argue that dis-embodiment may only occur where the brain is no longer able to embody the person, as in a condition such as locked-in syndrome. Gillett would say that human subjectivity is inscribed in the brain by a history of neurological, social, psychological, environmental, and other processes, which together embody the meaning of being a human being-in-the-world-with-others.

Link to the BBC article here.

Hat tip to Ryan Nadel for “drawing my attention” to the piece.

daniel buchman