Altruism tested

Three Mile Island

In 1979, I was in graduate school in Philadelphia, a city in which I had been living for seven years. As is the case with most people of university age, I had a cadre of close friends for whom I would give the world. Or at least so I thought.

As it happens, I also had a cousin who lived in Philadelphia, with whom I had had a sometimes frosty relationship. Despite the fact that we lived within 10 blocks of each other, we saw each other no more than a couple of times a year, events that I believe we both approached with equal amounts of dread and delight.

In March of that year, a nuclear reactor at Three Mile Island, less than 100 miles from where I was, experienced a partial meltdown. There were a few tense days. I remember one morning in particular being sufficiently charged with anxiety that I packed my car with some provisions, planning to leave town if things did not improve soon. The question was, who was I going to take with me? None of my close friends had cars of their own.

Actually, I didn’t even ask the question. I knew. I called my cousin and told her that I was leaving town that afternoon, and that if she wished, she (and her husband) could come with me. In the end, things calmed down just before we were set to leave, and Three Mile Island became a memorable blip in the history of the nuclear power industry. But for me, it taught me a deep lesson about the power of genetic relatedness and altruistic behaviour, one which has now been formally tested.


The Science of Compassion

Stephen Post, Professor of Preventive Medicine and the Director and Founder of the Center for Medical Humanities, Compassionate Care, and Bioethics at Stony Brook University, will be speaking in Vancouver on Friday April 29th. His talk, sponsored by the Dalai Lama Centre for Peace and Education, is titled “The Power of Giving, Compassion and Hope”. As it happens, this week’s Big Think newsletter has a short video by Dr. Post, which I have inserted below.

Just watching it will improve your day.

Ultimate envy

By now, only the most fundamentalist of libertarians cling to the belief that humans behave as Homo economicus, the mythical self-interested rational agent on which much of free market economic thinking is based. Studies of real humans behaving in the real world (or at least in research laboratory settings) have revealed that we all exhibit a variety of cognitive biases, and that these biases affect our decision-making in such a way that we regularly diverge from ‘rational’ behaviour.

If you wish, you could join the Less Wrong crowd who have, in recent years, been attempting to modify their own behaviours so that they conform better to pure rationality.  Their reasons for doing so vary, but include something along these lines: perfect rationality results in the best approximation of a condition conducive to human happiness. At a minimum, rationality is less wrong than what we have now.

It is not hard to demonstrate irrational behaviour among humans. One of the more compelling ways to do so is to ask people to participate in the Ultimatum game. A darling of neuroeconomists, the Ultimatum game gives you and a partner a sum of money (for best results, real money is used), let’s say $20. Your partner is given the task of deciding how to divide the money between the two of you. Your task is to decide whether to accept the partner’s offer, in which case you both keep the money, or reject it, in which case you both get nothing. If the partner offers you $10, the decision is easy and you both get 50% of the winnings. But when the partner is viewed as acting unfairly, offering only $1 out of the $20 pot (5%), people often reject the offer. [Similar effects are seen if the pot is $200, but if the pot becomes sufficiently large – say, a million dollars – most people say yes to an offer of only 5%.] The perfect rationalist idealized as Homo economicus would never do such a thing – why, after all, would anyone turn down a free dollar? The Less Wrong crowd would take a moment to consider what cognitive biases might cause individuals to turn down a free dollar under such circumstances, and work to try to nullify them. Real people in the real world turn down unfair offers with regularity.
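As a rough illustration, the responder’s side of the game can be sketched in a few lines of Python. The fairness threshold and the “big pot” cutoff below are illustrative assumptions, not parameters taken from any particular study:

```python
def responder_accepts(offer, pot, fairness_threshold=0.2, big_pot=1_000_000):
    """Accept the offer if it is a fair-enough share of the pot, or if
    the pot is so large that even a small share is too much to refuse."""
    if pot >= big_pot:
        return True  # e.g. 5% of a million dollars is hard to turn down
    return offer / pot >= fairness_threshold

def play(pot, offer):
    """Return (proposer payoff, responder payoff) for one round."""
    if responder_accepts(offer, pot):
        return pot - offer, offer
    return 0, 0  # rejection: both players walk away with nothing

print(play(20, 10))             # fair 50/50 split is accepted
print(play(20, 1))              # a 5% offer is spurned; both get nothing
print(play(1_000_000, 50_000))  # 5% of a huge pot is accepted
```

A strict Homo economicus would replace `responder_accepts` with `return offer > 0`; the gap between the two rules is exactly the irrationality the game exposes.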

What sorts of cognitive biases cause people to spurn an offer of free money? In the Ultimatum game, it seems that the unfairness causes people to feel pangs of disgust, and this emotional response is thought to modify rational thinking. The phenomenon is also a form of altruistic punishment, and has long been thought to act as a sort of social glue: members of society punish people who act unfairly, even if they do so at a cost to themselves. Put into this context, it might make a bit of sense to act this way – perhaps rationality plays out not in the self-interested way that libertarian economists would have us believe, but rather in the buttressing of a social order that, in the long run, serves the interests of everyone.

Or so the theory goes.

21st century enlightenment

Another great video from RSA-Animate. Matthew Taylor, Chief Executive of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (RSA), explores the meaning of 21st century enlightenment. There are many worthy ideas here, and given the way understanding of the brain is highlighted, I was naturally smitten. My favourite line: “21st century enlightenment calls for us to see past simplistic and inadequate ideas of freedom, of justice, and of progress.” [2nd place: “The moral and political critique of individualism now has an evidence base.”]

Watch, and feel free to note your favourite (or most reviled) line in the comments.

On Cooperation

One of the enduring questions of human existence relates to the tension between private and common interest. Often framed as the distinction between cooperation and individualism, it can be summarized as asking, “to what extent are my actions determined by my desire to pursue my own self-interest versus the interests of others?” The dilemma was certainly recognized by Darwin, but has been the focus of several bursts of academic interest in the last 50 years. In the 1960s, William Hamilton began to formalize the idea that altruism towards kin – those with whom we share some genetic heritage – made sense using the tools of evolutionary theory, and Richard Dawkins famously added a laser-like focus to this formalism with his selfish gene hypothesis.
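Hamilton’s formalism is often condensed into his well-known rule: an altruistic act is favoured by selection when r × b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the actor. A minimal sketch, with invented benefit and cost values purely for illustration:

```python
def altruism_favoured(r, b, c):
    """Hamilton's rule: altruism is favoured when r * b > c."""
    return r * b > c

# Helping a full sibling (r = 0.5) pays off, in evolutionary terms,
# whenever the benefit is more than twice the cost.
print(altruism_favoured(r=0.5, b=3.0, c=1.0))    # favoured
# For a first cousin (r = 0.125), the benefit must exceed 8x the cost.
print(altruism_favoured(r=0.125, b=3.0, c=1.0))  # not favoured
```

The relatedness coefficients (0.5 for siblings, 0.125 for first cousins) are standard; everything else here is a toy calculation.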

But what are we to make of the fact that humans regularly help individuals who are not kin?  In the 1970s, Robert Trivers developed the notion of reciprocal altruism to explain such cooperative behaviour – essentially, if you help me I’ll help you.  Soon thereafter, game theory began to be applied to the paradigm, and has turned out to be an exemplary way of probing the tension between cooperation and selfish behaviour (a previous post dealt with game theory and reciprocal altruism).  In one prominent series of studies, Ernst Fehr and his colleagues have amassed a substantial body of data showing that the kind of large scale cooperative behaviour exhibited by humans is dependent primarily upon the threat of punishment: the tit-for-tat hypothesis.  Now, in a new paper in Science, Rand et al. challenge this view, showing that in a public goods game, positive interactions promote cooperation when repeated interactions are expected to occur.  From the abstract:

The public goods game is the classic laboratory paradigm for studying collective action problems. Each participant chooses how much to contribute to a common pool that returns benefits to all participants equally. The ideal outcome occurs if everybody contributes the maximum amount, but the self-interested strategy is not to contribute anything. Most previous studies have found punishment to be more effective than reward for maintaining cooperation in public goods games. The typical design of these studies, however, represses future consequences for today’s actions. In an experimental setting, we compare public goods games followed by punishment, reward, or both in the setting of truly repeated games, in which player identities persist from round to round. We show that reward is as effective as punishment for maintaining public cooperation and leads to higher total earnings. Moreover, when both options are available, reward leads to increased contributions and payoff, whereas punishment has no effect on contributions and leads to lower payoff. We conclude that reward outperforms punishment in repeated public goods games and that human cooperation in such repeated settings is best supported by positive interactions with others.
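The collective-action tension the abstract describes can be sketched as a single round of the game. The endowment and pool multiplier below are typical illustrative values, not the parameters Rand et al. actually used:

```python
def public_goods_round(contributions, endowment=20, multiplier=1.6):
    """One round of a public goods game: contributions go into a common
    pool, the pool is multiplied, and the proceeds are split equally.
    Returns each player's payoff (what they kept plus their share)."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: the group as a whole does best.
print(public_goods_round([20, 20, 20, 20]))
# One free-rider contributes nothing and out-earns every cooperator,
# which is exactly why cooperation unravels without reward or punishment.
print(public_goods_round([20, 20, 20, 0]))
```

With a multiplier above 1 but below the group size, full contribution maximizes the group total while zero contribution maximizes the individual payoff, reproducing the dilemma in miniature.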

This work from Nowak’s group reprises a theme that is important in considering neuroeconomic studies of human behaviour: it is important to model the behaviour as closely as possible on the real world conditions in which humans live (and thrive), while trying to limit confounding variables as laboratory experiments are wont to do. [For another take on the issue, see this paper in Nature from earlier in the year, also from Nowak’s group.  Also, there is an excellent commentary in the recent issue of Science on the origins of cooperation by Elizabeth Pennisi.]

The tension between private and common interest is certainly of interest to academics studying the evolution of social behavior, but it is also central to nearly every debate about political life in the modern world.  Examples abound (the current health care debate in the US is an obvious one), but as a citizen of both the US and Canada, one comment in a recent issue of The New Yorker by Adam Gopnik strikes me as particularly relevant.  The article is a profile of Michael Ignatieff, Leader of the Official Opposition and the Liberal party in Canada, and possibly the next Prime Minister of the country.  Because Ignatieff is both a politician and a political philosopher who spent 25 years abroad including a long stint on the faculty of Harvard, it was perhaps inevitable that Gopnik’s prose would wander into describing the unique brand of glue that holds together the country known as Canada.

We are not, and have never been, the Canadian collectivists argue – in conscious opposition to older Anglo-American traditions – the rational individuals of liberal contract theory.  No man is an island, and rules made for imaginary islands ignore the fragile ecology of the archipelago.  We are people who live in communities, and our sense of who we are derives from what the people around us are like.  To exalt the individual and his rights at the expense of nurturing the tenuous threads of togetherness leads to violence, alienation, political apathy, and the growth of crazy movements that can supply, in moonshine form, the sense of solidarity that pure “rights” liberalism can’t – the very traits that Canadians see in a nearby country, they name no names.

Hope for humanity after all

Robert Wright, author of such thoughtful books as The Moral Animal and The Evolution of God, has an op-ed piece in the Sunday New York Times entitled, “A Grand Bargain over Evolution”.  Without considering the merits of his contentious argument that moral laws exist in some absolute sense and that we humans discover them as we go along (much as we have discovered the laws of thermodynamics), there was one part of his essay that caught my eye.

“There are plenty of evolutionary biologists who believe that evolution, given long enough, was likely to create a smart, articulate species — not our species, complete with five fingers, armpits and all the rest — but some social species with roughly our level of intelligence and linguistic complexity.

And what about the chances of a species with a moral sense? Well, a moral sense seems to emerge when you take a smart, articulate species and throw in reciprocal altruism. And evolution has proved creative enough to harness the logic of reciprocal altruism again and again.

Vampire bats share blood with one another, and dolphins swap favors, and so do monkeys. Is it all that unlikely that, even if humans had been wiped out a few million years ago, eventually a species with reciprocal altruism would reach an intellectual and linguistic level at which reciprocal altruism fostered moral intuitions and moral discourse?

There’s already a good candidate for this role — the chimpanzee.

Chimps, some primatologists believe, have the rudiments of a sense of justice. They sometimes seem to display moral indignation, “complaining” to other chimps that an ally has failed to fulfill the terms of a reciprocally altruistic relationship. Even now, if chimps are gradually evolving toward greater intelligence, their evolutionary trajectory may be slowly converging on the same moral intuitions that human evolution long ago converged on.”

Yes, chimps have a rudimentary sense of justice, including some version of reciprocal altruism.  But in a 2007 paper in Science, Jensen et al. showed that there are moral decisions that humans make that seem to be absent in chimpanzees.  The findings have implications both for our understanding of what makes humans unique and for economic decision making.

The experiment is a variant of the ultimatum game, and demonstrates a form of altruism called altruistic punishment.  Each of the players in this game gets to make one decision.  Player 1 is offered $100, and has to decide how much of this windfall to share with Player 2.  Crucially, Player 2 knows exactly how much money Player 1 received.  Once Player 1 makes an offer (all, some, or none), Player 2 gets to decide whether it is fair.  If the answer is yes, both players keep their winnings; if Player 2 views the offer as unfair, both forfeit the money.

The game has been played many times with humans, and regularly works out like this.  If Player 1 offers $50 to Player 2, everyone is happy and Player 2 accepts the deal.  If the offer is only $40, Player 2 may not be happy but will accept it.  But at some point (usually around $20), humans consider low-ball offers so unfair that they will reject them, even if it means that they get nothing.  The idea is that this reaction punishes Player 1 for their chintzy behavior, but it comes at a cost to Player 2; after all, isn’t $20 better than nothing?  Apparently, the trade-off that our brains make is between money and moral outrage, and at 20%, moral outrage wins out. Because Player 2 is giving something up to achieve this outcome, this phenomenon is called altruistic punishment, and it is evident in humans from diverse cultures, suggesting that this form of ethical behavior arose sometime before we spread out and (over)inhabited the entire planet.  Moreover, it is considered to be one of the ways in which we enforce social cohesiveness in human interaction. [For those of you wondering how altruistic punishment might work in real life, imagine a scenario where you are walking down the street and see a small person being beaten by a big bully.  Classical altruism theory suggests that you would not intervene unless the little guy is related to you – essentially, a variant of the selfish gene hypothesis.  But that is not what humans do – they do intervene, not always but often, and even at risk of personal injury, particularly when they perceive the fight as unfair.  This is altruistic punishment in action.]

It turns out that chimpanzees do not exhibit altruistic punishment when playing the ultimatum game.  At least under the conditions that Jensen et al. utilized, chimps readily accepted 20% of the pot (which consisted of raisins instead of dollars).  There are important constraints that are worthy of further exploration (see for example Neiworth et al, 2009), but the interesting point is this: social evolution has conferred on humans more complex moral intuitions than are seen in non-human primates.  It is not known at what point in our cultural evolution altruistic punishment arose, but it seems to me entirely plausible that it is a product of the specific social conditions under which humans have lived for some time.  In the midst of all the handwringing over the (sadly, many) ways in which humans mistreat each other, it is heartening to know that our magnificent brains have developed ways in which to make life with our fellow man more reasonable.  It appears as if there is hope for humanity after all.
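The species contrast can be caricatured in a few lines of Python. The 20% cutoff is the rough human threshold cited above, and both decision rules are deliberate oversimplifications of the actual findings:

```python
def human_accepts(offer, pot, threshold=0.20):
    """Humans reject offers at or below ~20% of the pot (a rough
    stylized fact from ultimatum-game studies, not a universal law)."""
    return offer / pot > threshold

def chimp_accepts(offer, pot):
    """In Jensen et al.'s setup, chimps accepted almost any nonzero
    offer: raisins are raisins."""
    return offer > 0

for offer in (50, 40, 20, 10):
    print(offer, human_accepts(offer, 100), chimp_accepts(offer, 100))
```

The human rule leaves money on the table at low offers; the chimp rule never does. The gap between the two is the altruistic punishment that, on this evidence, is distinctively human.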