Jonathan Haidt, Professor of Psychology at the University of Virginia and author of The Righteous Mind, has been visiting UBC for the past few days, and he stopped by the National Core for Neuroethics to discuss a variety of issues in which we have a common interest. While he was here, he was kind enough to sit down with me for a conversation about groupish genes, the response to his upcoming appearance on the Colbert Report, and current events.
Retribution, the dictionary tells us, is the dispensing of punishment for misdeeds. Derived from the Latin re tribuere, it literally means to pay back. We humans have strong retributive instincts, and it is often said that this behaviour arose as a product of our evolution as social beings: the threat of retribution enforces social norms, and was among the features that increased the likelihood of cooperation among members of society in the early years of human evolution. Given that cooperation confers significant adaptive advantages to the group, retributive norms flourished, and whether via genes or enculturation, the desire for retribution has been passed on to us.
The value of retributive impulses in the modern world is more difficult to discern. We humans are noted for the ability not only to act instinctively, but also to reflect upon the propriety of our actions. Amongst philosophers, retribution is often contrasted with consequentialism, the notion that the response to a misdeed should produce the best result for society. Consequentialism is possible because the modern human brain is able to reason, and by so doing we are able to anticipate near-, medium- and long-term futures: we can decide whether the best response to a misdeed is retribution, education, or even doing nothing at all. At different times, different responses may be called for. What matters most to the consequentialist is not the payback but rather the outcome. The tension between retribution and consequentialism is a hot-button issue in the field of neurolaw, where neuroessentialists argue that it is time to rethink the concept of punishment, while traditionalists suggest that deterrence remains the best way of organizing civil society.
The trolley problem is a famous thought experiment in philosophy, and it runs something like this.
A runaway trolley is hurtling down a track. Five people have been tied to the track directly in front of the trolley, but there is a switch that allows the trolley to be diverted to an alternative track, where one person has been tied down. You are standing at the switch and see the disaster unfolding. What do you do? Most people answer that they would flip the switch, killing one to save five – classic utilitarian thinking. The trolley problem has been embellished in a variety of interesting ways. The most famous of these is called the fat man problem: five people are tied to the track as before, but now there is a fat man on a bridge over the track, and if you push him off, he will fall in front of the trolley, stopping it and saving the five people as before; of course, the fat man dies in the process. People who were willing to pull the switch to save five people tend to be reluctant to push the fat man off the bridge. Philosophers suggest that this reluctance is based upon deontological thinking, where one's deep-seated values determine one's actions rather than cool rational thought.
By now, only the most fundamentalist of libertarians cling to the belief that humans behave as Homo economicus, the mythical self-interested rational agent on which much of free market economic thinking is based. Studies of real humans behaving in the real world (or at least in research laboratory settings) have revealed that we all exhibit a variety of cognitive biases, and that these biases affect our decision-making in such a way that we regularly diverge from ‘rational’ behaviour.
If you wish, you could join the Less Wrong crowd who have, in recent years, been attempting to modify their own behaviours so that they conform better to pure rationality. Their reasons for doing so vary, but include something along these lines: perfect rationality results in the best approximation of a condition conducive to human happiness. At a minimum, rationality is less wrong than what we have now.
It is not hard to demonstrate irrational behaviour among humans. One of the more compelling ways to do so is to ask people to participate in the Ultimatum game. A darling of neuroeconomists, the Ultimatum game gives you and a partner a sum of money (for best results, real money is used) – let’s say $20. Your partner is given the task of deciding how to divide the money between the two of you. Your task is to decide whether to accept the partner’s offer, in which case you both keep the money, or to reject it, in which case you both get nothing. If the partner offers you $10, the decision is easy and you both get 50% of the winnings. But when the partner is viewed as acting unfairly, offering only $1 out of the $20 pot (5%), people often reject the offer. [Similar effects are seen if the pot is $200, but if the pot becomes sufficiently large – say a million dollars – most people say yes to an offer of only 5%.] The perfect rationalist idealized as Homo economicus would never do such a thing – why, after all, would anyone turn down a free dollar? The Less Wrong crowd would take a moment to consider what cognitive biases might cause individuals to turn down a free dollar under such circumstances, and work to try to nullify them. Real people in the real world turn down unfair offers with regularity.
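The decision structure described above can be captured in a toy model. This is only a sketch: the 25% fairness threshold and the large-pot floor are illustrative assumptions chosen to reproduce the pattern described in the text, not empirical estimates.

```python
def homo_economicus(offer: float, pot: float) -> bool:
    """The idealized rational agent: any positive offer beats nothing."""
    return offer > 0


def real_responder(offer: float, pot: float,
                   fairness_threshold: float = 0.25,
                   absolute_floor: float = 50_000) -> bool:
    """Toy model of observed behaviour: reject offers below a fairness
    threshold (a fraction of the pot), unless the offer is so large in
    absolute terms that it is accepted anyway."""
    return offer / pot >= fairness_threshold or offer >= absolute_floor


# A $1 offer out of a $20 pot (5%): the rational agent accepts,
# but many real responders reject, leaving both players with nothing.
print(homo_economicus(1, 20))   # True
print(real_responder(1, 20))    # False

# The same 5% split of a million-dollar pot is usually accepted.
print(real_responder(50_000, 1_000_000))  # True
```

The point of the contrast is that the two functions disagree only on offers that are positive but "unfair" and small in absolute terms – exactly the region where laboratory behaviour departs from the rational-agent prediction.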
What sorts of cognitive biases cause people to spurn an offer of free money? In the Ultimatum game, it seems that the unfairness causes people to feel pangs of disgust, and this emotional response is thought to modify rational thinking. The phenomenon is also a form of altruistic punishment, and has long been thought to act as a sort of social glue: members of society punish people who act unfairly, even if they do so at a cost to themselves. Put into this context, it might make a bit of sense to act this way – perhaps rationality plays out not in the self-interested way that libertarian economists would have us believe, but rather in buttressing a social order that, in the long run, serves the interests of everyone.
Or so the theory goes.
Over at Pharyngula, PZ Myers’ reliably lively ScienceBlogs province, a recent post offers some incisive treatment of a philosophically arresting debate between Sam Harris and sundry interlocutors, most prominently Sean Carroll. The topic – whether science can answer moral questions, or, more perilously rendered, whether one can, in fact, derive “ought” from “is” – exerts a tidal attraction upon my blogging muscles, but I can resist for now; Myers’ relevance to this entry issues specifically from a choice bit of phraseology in his write-up.
When Harris claims that the discernment of human well-being (and hence of utility-maximizing courses of action) is a purely empirical matter, Myers finds him guilty of “smuggling in an unscientific prior in his category of well-being.” What I want to explore after the jump is the following possibility:
It may be that the developing body of work in neuroscience and psychology probing various morally charged phenomena has been smuggling in a politically loaded prior under the terminologically neutral guise of the category “prosocial behaviour.”
Although I presented my poster this morning, I’m not going to talk about it here. Rather, I’m going to briefly report and summarize an interesting poster I saw yesterday by graduate student Bradley Thomas, who is in Daniel Tranel‘s lab at the University of Iowa.
The poster was titled The Self-Other Bias in Moral Judgment is Insensitive to Ventromedial Prefrontal Damage. Thomas reported on a study that aimed to examine whether the ventromedial prefrontal cortex (VMPFC) is crucial for forming moral judgments about both Self and Other dilemmas, whether the self-other bias in moral judgments about these dilemmas can be replicated, and whether this self-other bias is sensitive to VMPFC damage. This work comes on the heels of other writings in this domain, including a recent paper in Neuroethics by Thomas Nadelhoffer and Adam Feltz. Some of Tranel’s previous work with others such as Antonio Damasio and Marc Hauser supports the notion that VMPFC damage increases utilitarian judgments.
The authors recruited 3 groups for the experimental study: a) neurologically ‘normal’ individuals; b) a brain-damage comparison group; and c) individuals with adult-onset VMPFC lesions. They created a novel battery of 12 Self and 12 Other high-conflict personal moral dilemmas, based on the trolley problem. Typically, when presented with the trolley-problem thought experiment, participants will endorse a simple utilitarian end – i.e., they will believe it is morally permissible to flip the switch (or pull the lever, depending on the version) to save 5 individuals and allow the trolley to run over 1 person.
Most notable among their findings was that individuals with VMPFC damage were the most likely to endorse a utilitarian outcome in both Self and Other dilemmas. Accordingly, as the title suggests, the Self-Other bias was insensitive to VMPFC lesions. Thomas and his co-authors suspect that the bias is not created by the VMPFC or other complex emotional processing. Instead, they hypothesize that the bias in moral judgment may be due to more basic psychological processes, such as an increased aversion to causing harm oneself versus another person causing that harm. I wonder, despite the VMPFC damage, if the somatic marker hypothesis may be somewhat relevant here…
I am really looking forward to seeing more work of this kind in the future.