As promised, today’s post will recap the Monday night lecture hosted by Penn’s Philosophy, Politics, and Economics department and delivered by Brown professor Fiery Cushman, covering the cognitive mechanisms that (usually) stop us from killing each other. For me this topic hit particularly close to home, since the humble independent research I have conducted under the CNS focuses on moral reasoning. It’s a fun area, as we’ll come to see – and Cushman jazzed things up at the end with what he admitted were speculative but still interesting potential ties between moral reasoning and political beliefs.
Over the past decade or so, a big debate in moral reasoning research has been the extent to which we recruit emotional rather than cognitive processing in making moral decisions. Plenty of other researchers have allowed for ambiguity in addressing this question, but what stood out about Cushman’s take was that he focused on a different distinction than the one between emotion and cognition.
To get to Cushman’s explanatory framework, we can start with Philippa Foot’s trolley problem, the classic scenario for testing moral reasoning. In the first version of this problem, you are told to imagine that you are at the wheel of a runaway trolley approaching a fork in the tracks. If you do nothing, the trolley will continue along its path to the left side and run over five workmen lying on the tracks. If you hit a switch on the dashboard, the trolley will turn right and hit a single workman on that side. The second version is similar, except this time you are standing on a bridge, and in order to block the runaway trolley from reaching the five workmen you would have to push an overweight man off the bridge onto the tracks. In both situations, taking action would result in the same outcome, i.e. sacrificing one life to save five. Yet most people say they are OK with flipping the switch, not so much with pushing the overweight man to his doom.
If you’re focusing on the emotion-cognition distinction, you might say that the “fat man” version of the problem elicits an emotional response by evoking a connection between you and another human being, whom you will feel personally connected to in a way that you wouldn’t necessarily feel toward a guy already lying on the tracks. And indeed, some neuropsychological evidence at least partially supports this interpretation. Patients with damage to the ventromedial prefrontal cortex (VMPFC) – a brain region associated with emotional processing, especially in social situations – are more likely to focus on consequences and sign off on decisions like pushing the fat man.
But as Cushman reminded us, it doesn’t exactly make sense to separate emotional and cognitive mechanisms. Emotions influence the personal values that help inform our cognitive assessment of moral problems, and cognitive analysis is required to translate emotional reactions into decisions. So Cushman has pushed back a level to look at moral reasoning styles that focus either on consequences or on the hypothetical acts themselves, with each style potentially supported by distinct cognitive mechanisms. This division may ring some bells with respect to the philosophical distinction between consequentialism, associated with Jeremy Bentham and John Stuart Mill, and deontology, associated with Immanuel Kant.
In conducting experiments that distinguish between outcome-centered and act-centered reasoning, this research does something really cool: it shows how lofty philosophical approaches to this topic might meaningfully line up with what people do and, more specifically, with what the brain does. Some examples: Cushman and colleagues found that subjects were more averse to simulating harmful acts that in reality have no bad outcomes than they were to imagining (and thus “witnessing”) scenarios that had objectively bad outcomes, like watching a football player break a leg. These results point to some intuitive act-centered moral processing. They also found an association between the VMPFC and averse reactions to committing morally disturbing acts, keeping emotion in the discussion but not necessarily as a counterpoint to cognition.
This all just scratches the surface of the research on this topic, and anyone hungry for more should check out Cushman’s lab site. If you need some enticing, let me just say that Cushman’s simulated harmful acts included using a hammer, a rock, and a handgun to maim a prosthetic leg he wore in the lab, reminding subjects that since it wasn’t his real leg they should go at it as hard as they could. He also connected his research to the field of “killology,” the study of the psychological aversion to killing – a term that merits some alone time with Google. As for political connections, Cushman mentioned a 2009 paper by researchers at UVA that found broad differences in moral reasoning styles between liberals and conservatives before sharing a study he conducted with Ivar Hannikainen and Ryan Miller at Brown. This latter study found a moderate association between conservative ideology and the act-centered view, and also between liberal ideology and the consequence-centered view.
So would Kant have been a right-winger and Mill a raging liberal? Nothing’s quite so clear cut when it comes to moral reasoning – but as Cushman’s lecture showed, the picture just keeps getting more interesting.