A recent study coming out of Penn’s psychology department is challenging two beliefs: one about a trendy technology, and one long-held assumption about control in the brain. Best of all, it’s done so by testing a grown-up equivalent of playtime.
To probe types of goal-oriented thinking in the brain, the study used transcranial direct-current stimulation, or tDCS — the trendy technology in the picture. tDCS works by placing electrodes over the scalp to either inhibit or promote activity in a targeted part of the brain. Led by Sharon Thompson-Schill, who directs the Center for Cognitive Neuroscience, the research team used this tool to inhibit activity in subjects’ prefrontal cortex (PFC) while measuring their performance on tests with different goals.
The results? Inhibiting left PFC — an area known to act as a filter for irrelevant information in goal-directed tasks — improved subjects’ abilities to come up with uncommon uses for common objects. Much as children would think to use, say, a broken antenna as a magic wand for play, adult subjects had to suggest uses for an object like Kleenex besides the obvious one, blowing your nose. It turned out they were better at doing this when the cognitive filter in their left PFC was temporarily turned off by tDCS.
The importance of left PFC in this study brings us to the idea of cognitive control. Cognitive control refers to brain processes that allow us to focus on immediate goals — in other words, processes like the PFC filter. Such mechanisms help us ignore any information that may bombard us when we’re completing a task, but which has nothing to do with our goal and so only serves as a distraction. Cognitive control is thus helpful in the opposite version of the playtime task, where subjects had to ignore any information about Kleenex that didn’t reflect its typical purpose of collecting snot.
So the role of the left PFC here was two-faced: As it promoted focus, it sapped away imagination. And that’s where the key insights from this study arise.
For one thing, it highlights that the effects of tDCS cannot be categorized along the binary of “good” or “bad.” This comes as a surprise to those who have wanted to see tDCS as a sort of holy grail treatment, a view recently questioned in Slate and Wired.
It’s tempting, and understandably so, to venerate this treatment. tDCS has mild physical side effects, usually just some scalp tingling or generally manageable discomforts like headaches. It can also be applied anywhere on the scalp. This last feature makes it a possible treatment for a range of psychiatric disorders and a viable form of cognitive enhancement. But as this study shows, if stimulating some part of the brain helps with one type of task, it is likely to impede others. It looks like with the brain, as with most things, we can’t expect to get something for nothing.
That same conclusion applies to the study’s findings with respect to cognitive control, a topic Thompson-Schill has been studying for twenty years. She said the focus has always been on how cognitive control enhances performance rather than how it might hurt. “I feel like there’s just kind of an assumption that having cognitive control would make anything better — or at least, neutral or better,” she said.
She began to question this assumption after reviewing the developmental psychology literature, which has plenty to say about things young children do better than adults. One of those things is thinking outside the box, at least when it comes to simple objects. The literature also shows that kids don’t start developing cognitive control until age 3 or 4. Combine these observations, and you’ve got a hypothesis in the tradition of evolutionary psychology: Maybe those childhood years without cognitive control aren’t just a hindrance. “I started asking, what if late cognitive control development is an adaptation?” Thompson-Schill said.
It’s hard to show that any cognitive feature is an adaptation per se. Even so, this study at least supports the idea that cognitive control — like tDCS — should be framed as a trade-off, rather than something with only positive effects.
As Thompson-Schill explained, this finding could further influence the way the concept of cognitive control gets applied, especially in education. Some researchers have advocated lesson plans to improve cognitive control, so as to heighten kids’ focus and other traits important for learning. But when such plans are implemented, it’s important to remember that we might not want cognitive control all the time — that imaginative thinking also contributes to learning.
After all, isn’t the real lesson from this study obvious? We all need playtime. And sometimes, even the most alluring tools and assumptions can get in the way of our play.
Imagine if bouncing back from an all-nighter could kill you. Such was the case, by analogy, for a group of roundworms recently studied by a team of researchers at Penn’s Perelman School of Medicine.
The full paper can be found here, but the main idea involved modeling sleep deprivation in the C. elegans worm. Researchers stimulated (i.e., poked) the worms awake during what would normally be a lethargic period. For control worms, this “deprivation” provoked a homeostatic sleep response — bodily adjustments to compel the worm to rest, like feelings of exhaustion in humans that are coordinated by the brain. For worms with a mutation blocking these adjustments, their response at first looked better than normal. They stayed awake through what was meant to be a rest period and seemed to avoid their equivalent of exhaustion. Except then, a bunch of them died.
You might say, hmm, that’s interesting enough — but this obviously wasn’t an “all-nighter” in a human sense of the phrase. These worms don’t even experience what we humans think of as sleep. Yet as Penn’s chief sleep researcher, David Dinges, explained, simple animal models have often fostered insights into the mechanisms of human sleep. “You can’t just say this is irrelevant, that it’s just a worm hanging in a bubble of water. It’s highly relevant,” Dinges said.
In particular, he pointed out that mechanisms for sleep and energy conservation are dictated by features of our natural environment — like the sunlight/darkness cycle — that affect all manner of living things, from worms to people.
Given this connection, I didn’t feel completely ridiculous responding to this study by reconsidering the way humans gauge our own need for sleep. Just to tease out this idea, let’s anthropomorphize these worms a bit by thinking of them as people. The mutants would be those irritatingly efficient individuals who seem able to stay up all night and still function in the morning. But in this case, seemingly out of nowhere, the little guys met their maker. Even if the worms looked like they could manage without rest, they clearly couldn’t.
I wonder if something like this often happens with people. We might think we’ve recovered fine from a few nights of scant sleep after we’ve had some caffeine pick-me-ups or a good nap. We subscribe to the notion of “catching up”: going days or weeks at a time on unhealthy schedules, with the assumption our bodies will recover once we hit a weekend or vacation period when we can sleep in. Though these binges are meant to compensate for sleep we’ve lost, it looks like in many ways they fall short.
This is the not-so-rosy picture outlined by psychology doctoral student Andrea Spaeth, who studies sleep and energy balance under Dinges. Spaeth noted that long-term sleep loss is associated with substantial weight gain and unhealthy eating behaviors. After someone sleeps in, or “recovers,” following a stint of deprivation, they typically just return to baseline levels of eating. That means they’ll have to work extra to make up for any damage already caused to their fitness during their time of insufficient sleep.
“Catching up” may also fail to restore function. Take someone who sleeps less than seven hours a night for multiple days and then binges with a 12-hour sleep session (in other words, many a college student). Even after that binge, performance on a range of cognitive tasks is still likely to be impaired.
So though the worm findings are interesting in their own right, they can also be a reminder: we can still need sleep even when we feel fine or think we’ve “caught up.” The stakes may not be quite as high for us as they were for the tragic C. elegans mutants. Still, the potential for chronic weight gain and decreased cognitive function is desirable for no one.
Hot off the presses! This month marks the publication of Neuroethics in Practice, a collection of articles on neuroethics in healthcare settings edited by the CNS’ own Anjan Chatterjee and Martha Farah. I won’t give an overview, since the editors have done that already. But from the book’s major topics — which include brain enhancement, competence and responsibility, imaging, brain damage and new treatments — one issue jumped out to demand more screen time: the regulation of neuroenhancement among young people.
Though enhancement among youth is a popular topic, the discussion rarely involves specific policy recommendations. But those are exactly what Ilina Singh and Kelly Kelleher offer in their article for Neuroethics in Practice. Their argument: given that neuroenhancement is already being used, it should become a clinical option for young people that is regulated by primary care providers.
The details can be found in a version of the paper printed here. More broadly, two aspects of this position stand out.
First, regardless of whether it is “correct,” this stance is an unusually practical response to concerns over the inevitability of some enhancement — concerns stemming from the simple fact that as long as these medications exist, some unimpaired individuals will get them. Singh and Kelleher move beyond the instinctual reaction of fretting to ask, how can we deal with this problem in everyday health care?
Second, addressing neuroenhancement in primary care could have implications for a key subtlety of this topic: as of now, neuroenhancement is not equally available to all young people. As Singh and Kelleher point out, enhancement is at least initially more likely to spread throughout well-resourced families and communities, particularly where students attend competitive secondary schools. And reports on collegiate use of stimulants in the U.S. have found it to be more common in the northeast and at schools with more competitive admissions standards.
In other words, current neuroenhancement among young people seems tied to cultures of competitive academics, certain definitions of success and expendable money to put toward pills. But primary care is a different story — all young people should be able to get check-ups, not just those at elite schools. In that sense, these proposals raise important questions and new possibilities for the cultural context of neuroenhancement.
So, the final takeaway on the Singh and Kelleher paper? As part of the conversation on enhancement, it adds something new. As a peek into Neuroethics in Practice, it should make you curious about the whole book.
So the Dalai Lama walks into a room full of neuroscientists…
That sentence sounds like the revamped start of a priest-rabbi-imam joke. But it’s the reality of the Mind & Life Institute — a nonprofit aiming to combine the first-person experiences of contemplative practices like meditation with scientific studies of the brain.
Mind & Life has involved the Dalai Lama since its 1987 founding, and over the years the group has brought in other spiritual leaders, committed meditators and well-respected scientists to fulfill its mission. It supports research on a range of topics, from the effects of mindfulness training on attention and working memory in children, to yogic breathing and cognition, to interdisciplinary definitions of human spirituality.
I’ll admit it: when I first read about an organization applying Western neuroscience to contemplative practice, I was a little skeptical (or paranoid, depending how you see it). It’s easy for Western cultures to appropriate traditionally Eastern practices in ways that ring superficial — case in point, the yuppie yoga boom. If this were an instance of neuroscientists trying to “validate” practices that have been around for thousands of years, I can’t say I would have been on board.
But Mind & Life is not about validation. Rather, the goal seems to be bringing together experts from both sides of the collaboration so they can find points of overlap and learn from each other — a crucial element for any program hoping to bridge neuroscience with other fields. Sure, some of the studies have sought and confirmed scientific evidence of health benefits to meditation, like helping to treat mental illness, reduce chronic pain and generally improve bodily health. But these studies don’t seem to be conducted from a standpoint of, “Well, science has to prove that any of this works.” That kind of approach probably couldn’t fly anyway, given the central involvement of the Dalai Lama and committed practitioners of meditation, many of whom are also working scientists.
As with any collaboration, Mind & Life is continually learning and changing. In the past year, for example, it launched a humanities and social sciences initiative to give more voice to the humanities in understanding the mind. In fact, the Institute’s perhaps necessarily changing identity makes it a great case study for exploring what it even means for neuroscience to exchange with other fields.
That question has been taken on by history and sociology of science professor John Tresch, who chairs the science, technology and society undergraduate major at Penn. Tresch has studied Mind & Life on-and-off for several years, publishing a paper last year on their summer program (which CNS Managing Director Denise Clegg has twice attended). When I spoke with Tresch about the Institute, he made some interesting big-picture points. They mostly centered on the idea that no kind of science interacts with other disciplines as a purely intellectual entity. Rather, science comes embedded in a culture, which can’t help but influence the ways it relates to other perspectives.
For example, as Tresch explained, meditation plays a different role in Western conceptions of Buddhism than it does in countries with strong Buddhist traditions. This difference may affect the religious connotation of meditation across cultures and, in turn, influence its perception in scientific communities. Science and contemplative practice also have different attitudes toward introspection. While introspection has long been central to meditation, science has alternatively viewed it as an important tool (à la William James) or a dirty word (à la John Watson).
Such issues don’t have to hold back Mind & Life, but they are worth including in the conversation. It doesn’t matter whether you’re curious about science, contemplative practice, both fields or even just the big questions about the mind. Either way, all sides of this project — and its many participants — are something to keep an eye on.