Meditation really seems to work as a peacekeeper—even between itself and the seemingly opposite discipline that is scientific research.
Or at least, that’s the picture that came out of an April 25 talk given by CNS Managing Director Denise Clegg on the benefits of mindfulness. I’ve written on the burgeoning relationship between contemplative practice and neuroscience before. But as part of this year’s Philadelphia Science Festival, this talk gave a fresh look at the topic by unpacking what exactly scientific studies could have to say about meditation and related practices.
At first glance, the two fields seem to operate on different planes. Where one strives for objectivity, the other goes for heightened consciousness of subjective experiences. But Clegg and her colleague Ilene Wasserman broke right through that surface-level opposition with plenty of findings from neuroscience that probe how meditation might change the brain, what level of impact it can have and on whom.
In one study Clegg mentioned, eight weeks of regular meditative practice was associated with greater activation in left-sided anterior brain regions that have been linked to positive emotions. This activation pattern also predicted a better response to flu vaccine in subjects who meditated. Looking at a group of especially stressed individuals, another study found an association between an eight-week mindfulness program and reduced gray matter density in the amygdala. This finding pointed to one way meditation might literally shape the brain.
Other studies took on clinical issues, like examining meditative practice as a potential treatment for ADHD. Another looked at the ways positive emotions, mediated by mindfulness, can promote healthy outcomes like a bolstered sense of purpose.
Whatever the topic, Clegg emphasized that these studies should be viewed as “promising, but preliminary.” Many had fairly small sample sizes of around two dozen subjects. And since meditation asks individuals to focus on their own bodies, many of its effects may be specific to the individual. Within these limitations, what these studies really do is probe a collaborative field that is still relatively new, making it both speculative and exciting.
Clegg and Wasserman also kept the audience from losing sight of the heart of the field: meditative practice itself. Guided by Clegg, we tried breathing techniques, shifting our attention and the technique of loving kindness meditation. That gave us a flavor for the activities that may drive the positive outcomes we discussed. (Anyone interested in learning more about mindfulness practice can start here.)
Even practiced briefly, these techniques had a calming effect. And who knows—if you do them enough, they might, say, shrink your amygdala and your fear responses.
In a lecture that touched on brainwashing and false confessions, on the government-led administration of hallucinogens to soldiers and lie detection, what most stood out was nature’s favorite love serum. It turns out that oxytocin — the same hormone secreted after sex and during nursing — might have a role in military intelligence.
As part of Jonathan Moreno’s Monday night neuroethics class for bioethics master’s students at Penn, CNS director Martha Farah spoke on the overlap between brain research and national security, a topic Moreno has written on extensively. Biological warfare is fascinating as a whole, but Farah’s punch line on the potential use of oxytocin in interrogation was especially interesting because it gives a new twist on our picture of the famous “love hormone.”
Oxytocin’s got a warm and fuzzy reputation, hence the warm and fuzzy nickname. It can act as a neurotransmitter and has diffuse effects on the body and brain, though its best-known effects involve bonding. It helps facilitate breastfeeding and uterine contraction during labor, and as a neuromodulator it has been linked to maternal behaviors, monogamy among prairie voles, romantic attachment in humans and social trust. As Farah explained, it’s that last point that caught the attention of security-minded folks and provoked the question: Could oxytocin play a role in interrogation? If you administer the love hormone, will it induce a state of trust and fondness that makes someone in the hot seat spill?
Farah reviewed research from the past decade pointing to this possibility. In one experiment, observers blinded to the study’s focus rated fathers who had been administered oxytocin as more attentive and communicative in playing with their children than fathers in the control group. Another study found that oxytocin made participants more generous and forgiving in economic games involving trade-offs in gains, while a third suggested that oxytocin encourages information-sharing. As of now, the most effective way to administer oxytocin is intranasally, meaning you can’t quite give it to people without their noticing. Regardless, some view oxytocin as a promising pathway to gentler interrogation, replacing physical or psychological pressure with the equivalent of a mental love tap.
Of course, this idea comes with cautionary footnotes. Farah explained that oxytocin may have a “dark side”: if individuals are dealing with someone they classify as outside their social group, increased oxytocin levels may actually make them more hostile. Aside from raising questions about oxytocin’s broader evolutionary role, this effect could spell trouble in security contexts. Greater hostility between parties is never desirable, but more specifically, individuals being interrogated often passionately separate themselves from the groups doing the questioning. The very fact of being interrogated might drive an individual to feel like a group outsider. In such cases, oxytocin’s dark side would fan already considerable flames.
And a number of ethical objections can be raised, including the possible abuse of these methods and the “slippery slope” toward broader manipulation of brain states. There’s also the what-does-this-all-say-about-humanity concern that it’s wrong to play on the virtue of kindness to benefit the dirty work of interrogation. Any suggestion of “mind control” can be unsettling, even (or perhaps especially) when it involves something as innocent-sounding as the love hormone.
If the potential hijacking of love by war is a bit too much for you, check out some recent research into the role of oxytocin in sports. As for ways in which this hormone’s versatility may point to connections between sports, sex and war, that’s a topic for another day. Or maybe even another whole month.
As promised, today’s post will recap the Monday night lecture hosted by Penn’s Philosophy, Politics, and Economics department and delivered by Brown professor Fiery Cushman, covering the cognitive mechanisms that (usually) stop us from killing each other. For me this topic hit particularly close to home, since the humble independent research I have conducted under the CNS focuses on moral reasoning. It’s a fun area, as we’ll come to see – and Cushman jazzed things up at the end with what he admitted were speculative but still interesting potential ties between moral reasoning and political beliefs.
Over the past decade or so, a big debate in moral reasoning research has been the extent to which we recruit emotional rather than cognitive processing in making moral decisions. Plenty of other researchers have allowed for ambiguity in addressing this question, but what stood out about Cushman’s take was that he focused on a different distinction than the one between emotion and cognition.
To get to Cushman’s explanatory framework, we can start with Philippa Foot’s trolley problem, the classic scenario for testing moral reasoning. In the first version of this problem, you are told to imagine that you are standing at the wheel of a runaway trolley approaching a fork in the tracks. If you do nothing, the trolley will continue along its path to the left side and run over five workmen lying on the tracks. If you hit a switch on the dashboard, the trolley will turn right and hit a single workman on that side. The second version is similar, except this time you are standing on a bridge, and in order to block the runaway trolley from the five workmen you would have to push an overweight man off the bridge onto the tracks. In both situations, taking action would result in the same outcome: sacrificing one life to save five. Yet most people say they are OK with flipping the switch, not so much with pushing the overweight man to his doom.
If you’re focusing on the emotion-cognition distinction, you might say that the “fat man” version of the problem elicits an emotional response by evoking a connection between you and another human being, to whom you would feel personally connected in a way that you wouldn’t necessarily feel toward a guy already lying on the tracks. And indeed, some neuropsychological evidence at least partially supports this interpretation. Patients with damage to the ventromedial prefrontal cortex (VMPFC) – a brain region associated with emotional processing, especially in social situations – are more likely to focus on consequences and sign off on decisions like pushing the fat man.
But as Cushman reminded us, it doesn’t exactly make sense to separate emotional and cognitive mechanisms. Emotions influence the personal values that help inform our cognitive assessment of moral problems, and cognitive analysis is required to translate emotional reactions into decisions. So Cushman has pushed back a level to look at moral reasoning styles that focus either on consequences or on the hypothetical acts themselves, with each style potentially supported by distinct cognitive mechanisms. This division may ring some bells with respect to the philosophical distinction between consequentialism, associated with Jeremy Bentham and John Stuart Mill, and deontology, associated with Immanuel Kant.
In conducting experiments that distinguish between outcome-centered and act-centered reasoning, this research does something really cool: it shows how lofty philosophical approaches to this topic might meaningfully line up with what people do and, more specifically, with what the brain does. Some examples: Cushman and colleagues found that subjects were more averse to simulating harmful acts that in reality have no bad outcomes than they were to imagining (and thus “witnessing”) scenarios that had objectively bad outcomes, like watching a football player break a leg. These results point to some intuitive act-centered moral processing. They also found an association between the VMPFC and averse reactions to committing morally disturbing acts, keeping emotion in the discussion but not necessarily as a counterpoint to cognition.
This all just scratches the surface of the research on this topic, and anyone hungry for more should check out Cushman’s lab site. If you need some enticing, let me just say that Cushman’s simulated harmful acts included using a hammer, a rock and a handgun to maim a prosthetic leg he wore in the lab, reminding subjects that since it wasn’t his real leg they should go at it as hard as they could. He also connected his research to the field of “killology,” or the study of psychological aversion to killing, a term that merits some alone time with Google. As for political connections, Cushman mentioned a 2009 paper by researchers at UVA that found broad differences in moral reasoning styles for liberals and conservatives before sharing a study he conducted with Ivar Hannikainen and Ryan Miller at Brown. This latter study found a moderate association between conservative ideology and the act-centered view, and also between liberal ideology and the consequence-centered view.
So would Kant have been a right-winger and Mill a raging liberal? Nothing’s quite so clear cut when it comes to moral reasoning – but as Cushman’s lecture showed, the picture just keeps getting more interesting.