Thoughtfully engaging modernity

Jonathan Franzen’s diatribe in The Guardian, a preview of his forthcoming book The Kraus Project, is provocatively entitled What’s wrong with the modern world? Trotting out many standard objections to techno-utopianism, he bemoans the overuse of Twitter and of Apple products generally, and even calls out Jeff Bezos as one of the four horsemen of the Apocalypse. But it is not the Apocalypse of the Bible to which he refers, but rather the more personal apocalyptic crises that we all experience. He was introduced to this idea by the early 20th-century Viennese cultural critic Karl Kraus, also known as The Great Hater, an individual with whom Franzen has been obsessed for a couple of decades. He explains that,

Kraus’s signal complaint – that the nexus of technology and media has made people relentlessly focused on the present and forgetful of the past – can’t help ringing true to me. Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse. Naturally, because he was the first, the changes felt particular and unique to him, but in fact he was registering something that has become a fixture of modernity. The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that any connection of the key values of the past have been lost. As long as modernity lasts, all days will feel to someone like the last days of humanity.

Alexander Nazaryan is a bit dyspeptic himself in response to Franzen’s take on modernity, arguing that Franzen’s cri de coeur offers critique sans cure (notably, Nazaryan offers no remedy either). Michael Jarvis is a bit more sympathetic to the neo-Luddism of Thomas Pynchon in his review of Bleeding Edge, the new novel by the famously reclusive author. Unlike Franzen, who tells us that “Not only am I not a Luddite, I’m not even sure the original Luddites were Luddites”, Pynchon is unabashed about his views on modernity. Much as Franzen glorifies Kraus as The Great Hater, Pynchon is on record exalting Ned Lud – the original Luddite – as a Badass. Pynchon argues that Luddism, which traces its name to Ned Lud’s legendary machine-breaking of 1779, was not a response to technology per se but rather class war, a reaction to the disenfranchisement of poor workers by modern machines. While there is a kernel of truth to his assertion, the mastery over technology wrought by the Industrial Revolution represented a cultural shift that went well beyond concern for jobs. Tellingly, it was only a few decades later that Mary Shelley sat by the fireside at Villa Diodati, weaving the story that was to become the mainstay of every subsequent backlash against technology: Frankenstein, or the Modern Prometheus.

Frankenstein still strikes a chord, but in today’s world it is decidedly unwise to rail against modernity. Not only will you be pilloried in the press, but even if people buy the argument they will remain in thrall to modern technology – it is just too seductive to ignore. Moreover, the rants miss the point. Personally, I want to engage with modernity and live a rich, juicy life that is authentic and true. In short, I want to flourish as a modern. The better question asks how we might embrace modern technology and do so thoughtfully. And here the best lesson comes from the most surprising of sources: the Amish. This past January, I had a chance to have an extended conversation with Jamey Wetmore in his office at the Consortium for Science, Policy & Outcomes at Arizona State University, where he enlightened me about Amish attitudes towards technology. According to Jamey’s studies, the Amish are not anti-technology. Rather, they think often and deeply about how technology affects their values. For the Amish, these values are closely interwoven with their religion, and so they decline to adopt technologies that conflict with their religious values. But the choice is active – they gather together, consider the pluses and minuses, and then collectively decide on a course of action. Those with a more secular take on the world (me!) may harbour a different set of values, but values we have, and it seems to me that it is worth following the example of the Amish and asking: how does modernity impact my value system? Posed in this way, the question naturally leads to answers that lack the crankiness of Franzen’s and Pynchon’s tirades, while providing a way to engage that is more thoughtful than the techno-utopian musings of their interlocutors: weigh your engagement with technology against your own personal values.

Ah, but you might say that knowing something is a problem and doing something about it are two different things. Small steps are often the most effective way to modify behaviour, and here is one that might help. A common complaint about modern life is that in the middle of a conversation, someone glances at their computer or smartphone (are they even different anymore?), checking for what can best be described as I-don’t-know-what-but-something-might-be-new. The person who looks away is distracted; the one who was ignored is, well, ignored. Everyone acknowledges the problem. And everyone does it from time to time. So for the next three days, just do this: notice. Don’t chuck your technology out the window, and definitely don’t beat yourself up when you sneak a peek at some digital screen in the middle of a conversation. You might try practicing what Buddhists suggest doing with any behaviour you want to forestall – get curious about it. What was being said when you looked away? How do you think the other person felt? How did you feel about the whole thing? Most of all, ask yourself whether your actions align with your values. If you want the exercise to really bear fruit, make a note each time it happens – on paper, or in some electronic file – because the simple act of jotting down what was going on when your mind drifted from present to virtual will help change your brain’s ingrained pattern of behaviour. At first it will be hard, awkward, and maybe even a bit uncomfortable. You will start out catching yourself checking your whatever in the middle of a conversation as frequently as before, but by the third day it will become a rarity. And you will be better for it.

When is it rational to be nudged?

Five years ago, Richard Thaler and Cass Sunstein published a thoughtful little book called Nudge in which they outlined a broad program for improving the outcomes of human decisions. Drawing on the maturing field of behavioural economics, Thaler and Sunstein described the myriad ways in which small changes in the environment can affect the choices we make. In the intervening years, interest on the part of governments in developing such programs has grown ever stronger. In Great Britain, the government of David Cameron established the Behavioural Insights Team in 2010, with Richard Thaler as advisor. Cass Sunstein was appointed Administrator of the White House Office of Information and Regulatory Affairs under Barack Obama, where, among other tasks, he developed government-wide regulations that nudge people in numerous ways, although exactly what was done has always been a bit under the radar. Now comes news that the US government is developing its own Behavioral Insights Team, and there is a call for people with appropriate skills to join.

Nudging is not without its critics. Those with libertarian sensibilities are predictably outraged, even though Thaler and Sunstein described the program as an exercise in libertarian paternalism. The primary concern is that nudging infringes upon autonomy, which brings it directly into the sight lines of neuroethics. The key issues were recently summarized by Evan Selinger in a short article in New Scientist.

Fair-minded individuals may debate the degree to which the infringement upon autonomy engendered by nudges is problematic, but Gidon Felsen, Noah Castelo and I decided to take a different tack. First of all, we reframed the issue, calling it Decisional Enhancement rather than nudging (that our reframing is, in and of itself, a bit of a nudge did not escape our notice). More importantly, we have begun to explore how the public views the infringement of autonomy that decisional enhancement programs entail. Essentially, we wanted to explore the degree to which people are willing to trade autonomy for better outcomes. The results of our adventures in experimental neuroethics can be found in our recently published paper in Judgment and Decision Making. One key insight is this: when people need help achieving their objectives in life, they are not loath to give up a bit of autonomy. It does not appear that people endorse giving up autonomy simply because their objectives are aligned with the decisional enhancement program. Rather, it is when their objectives align with the program and they recognize that they are struggling to achieve those objectives that the endorsement is most evident. To put it in terms developed by Harry Frankfurt, it seems that autonomy violations are most acceptable when people recognize that their decisions are more likely to follow their lusty first-order desires – to overeat, to spend money foolishly, etc. – than their sober life objectives, what Frankfurt called second-order desires. Viewed in this light, perhaps it is entirely rational to give up a bit of autonomy in order to live as we wish.

Communication with vegetative state patients: Dialogue or soliloquy?

By Ania Mizgalewicz and Grace Lee

The world first heard from Canadian Scott Routley this past week. Routley, who has been in a diagnosed vegetative state for the last 12 years, seemed to communicate to scientists – via signals measured from blood flow in his brain – that he was not in pain. The finding made headlines on major news sites and spurred extensive public commentary. Comments ranged from fearful to hopeful, touching on mind reading, clinical applications of the technology, and its potential to allow patients to communicate their desire to live.

Adrian Owen, a leading neuroscientist in London, Ontario, explains that the technology currently allows patients to respond to yes-or-no questions, but may one day be used to aid in more interactive communication. Questions would center on daily living preferences, with the aim of improving quality of life and health care.

The findings by Owen and his group are truly exciting and provide great hope for the historically neglected population of people with serious brain injuries. Here at the National Core for Neuroethics, we encourage more discussion of the ethical implications of this technology. Questions such as those surrounding end-of-life decisions lie far in the future. The focus at present should thus remain on validating this technology so that it might one day be used in the clinical setting. If clinical use becomes feasible, we will need to address questions about access to the technology and the impact that its availability would have on the families of patients.

Great caution and restraint are needed when coupling this still-emerging technology with concerns about mind reading or clinical decision-making about the end of life. Hype here unfairly detracts from the true value of this work. With one in five vegetative patients showing signs of consciousness in these studies, the focus should remain on improving their daily surroundings, providing them with a means of communication, and supporting their family members. These findings should also spark a conversation about the effectiveness and validity of the clinical tests currently used to diagnose these patients at the bedside.

Top image: wellcome images / flickr
Bottom image: Noel A. Tanner / flickr

Neuroscience in the public sphere

Here at Neuroethics at the Core, we have been trumpeting the rise of neuroessentialist thinking among the public for some time (here and here and here), and it represents one of the two pillars of my research program in neuroethics. In today’s issue of Neuron, there is a great paper by O’Connor et al. entitled “Neuroscience in the Public Sphere”. The abstract sums it up rather well:

The media are increasingly fascinated by neuroscience. Here, we consider how neuroscientific discoveries are thematically represented in the popular press and the implications this has for society. In communicating research, neuroscientists should be sensitive to the social consequences neuroscientific information may have once it enters the public sphere.

There are a few points that I would like to highlight. First, as my graduate student Roland Nadler relayed in an email to me last night after we both had a first glance at the paper:

…this is a fantastic article from start to finish. Worth really savoring as an example of how to do the normative stuff well, and its lessons are important for us to avoid producing stuff that could be tarred as neurotrash. Particularly neat that they get the definition of neuroessentialism right. Their discussion of it near the end is trenchant. It makes it clear that some philosophical work needs to be done to save neuroessentialism from the pitfalls of essentialism tout court – as they rightly point out, the latter is some bad juju.
On the topic of neuroessentialism, I particularly liked their final paragraph:

Neuroscience does not take place in a vacuum, and it is important to maintain sensitivity to the social implications, whether positive or negative, it may have as it manifests in real-world social contexts. It appears that the brain has been instantiated as a benchmark in public dialogue, and reference to brain research is now a powerful rhetorical tool. The key questions to be addressed in the coming years revolve around how this tool is employed and the effects this may have on society’s conceptual, behavioral, and institutional repertoires.

Not only do O’Connor et al. provide thoughtful normative comments, they also carried out empirical work, employing content analysis to study the themes that arise most frequently in the popular press. At the top of the list is enhancement of the brain, which represented 28.3% of the articles retrieved from the LexisNexis database. As this just so happens to be the other pillar of my research program, how could I not like this paper?
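For readers curious about how a figure like that 28.3% is produced, here is a minimal sketch of the tallying step of a content analysis, written in Python. The theme labels and article counts below are hypothetical stand-ins for illustration only, not data from the O’Connor et al. study.

    from collections import Counter

    # Hypothetical coding results: each article has been assigned one dominant
    # theme label by human coders (stand-in data, not the study's corpus).
    coded_articles = (
        ["brain enhancement"] * 566
        + ["psychopathology"] * 400
        + ["parenting"] * 350
        + ["other"] * 684
    )

    counts = Counter(coded_articles)
    total = len(coded_articles)

    # Report each theme's share of the corpus, most frequent first.
    for theme, n in counts.most_common():
        print(f"{theme}: {n}/{total} articles ({100 * n / total:.1f}%)")

The arithmetic is nothing more than counting and division, of course; in studies like this one, the real work lies in the human coding of each article, not in the tallying.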

Excellent stuff.

Use it or lose it

As the technology of memorializing dialogue (in stone, no less) came into vogue, Socrates famously admonished Phaedrus on its dangers: if people are able to write everything down, their ability to remember what was said will diminish. Plato, Socrates’ protégé and an early version of an early adopter, memorialized the debate, and that is why the apocryphal story is with us today. But even without a grounding in modern neurobiology, Socrates had a valid point: the plasticity of our brains is such that the less we use them for a given function, the more our ability to carry out that function is impaired.

This becomes a tricky issue when thinking about the world in which we live today. In a thoughtful essay over at The Atlantic, Evan Selinger reviews a number of arguments for and against the use of ‘apps’ to make us, as he puts it in his title, a better person. What Evan is particularly concerned with are digital willpower enhancements: the suite of technologies that have been developed to help us do everything from resisting distraction by a tweet to refraining from eating more than we would like.

Graphic Warnings on Cigarettes: Nudge or Shove? A Neuro-Perspective

Although the topic of cigarette packaging regulation may not leap immediately to mind when one thinks “neuroethics,” this Bob Greene opinion piece over at CNN nonetheless touched off a stimulating discussion among some of us at the Core recently. The neuroethics connection, in fact, struck us as quite natural: our group has researched (and blogged about) the ethics of “nudging” frequently of late, and, as I worded it when I first emailed the article around, “certainly the images at issue here are a kind of behavioural nudge.” The question that we grappled with was whether the kind of nudge that the graphic warning labels provide is warranted in the case of cigarettes. And, indeed, that discussion called my original characterization into question. Do these labels truly constitute a nudge – a subtle biasing technique that makes a particular option more cognitively accessible than another while preserving the freedom to choose between them – or are they something more akin to a “shove?”

One of the least gruesome of the proposed images for cigarette packs.

As with any highly politicized issue, the question of whether cigarettes ought to be labeled with disturbing imagery is likely to be filleted into oblivion by pundits, bloggers, legal experts, economists, et cetera, et cetera. All I hope to do here, then, is sketch some ways in which the view from neuroethics – informed as it is by philosophy and the cognitive sciences – can shed some interesting and hopefully useful light on the question.

TDCS does not reduce the authenticity objection

In an essay in a recent issue of Current Biology, a team of neuroscientists and philosophers examines the neuroethics of transcranial direct current stimulation (TDCS), a relatively inexpensive means of modifying human brain activity that is touted as potentially being at the forefront of a new wave of cognitive enhancement. The article has garnered a great deal of interest in the press (for example here and here and here), and the reasons are unsurprising: the prospect of a device that is cheap (probably), safe (maybe), and effective (time will tell) is something akin to the holy grail of cognitive enhancement. If the initial claims for TDCS hold up, the device may have an impact on the practice of enhancement in the relatively near term. As a result, the urgency with which our community must think through the relevant ethical issues intensifies.