Wednesday, September 26, 2012

TED Talks - 12 Talks on Understanding the Brain


TED has featured a LOT of really cool talks on the brain over the years - and these are 12 of their top picks, although not necessarily mine. Still, there are some excellent talks here, including VS Ramachandran, Michael Merzenich, Sarah-Jayne Blakemore, and Oliver Sacks.

12 talks on understanding the brain


Read Montague is interested in the human dopamine system — or, as he puts it in this illuminating talk from TEDGlobal 2012, that which makes us “chase sex, food and salt” and therefore survive. 

Specifically, Montague and his team at the Roanoke Brain Study are interested in how dopamine and valuation systems work when two human beings interact with each other. Twenty years ago, studying a topic like this was all but impossible because scientists relied on worms and rodents for insight into the brain. But today, in addition to animal research, neurobiologists have at their disposal functional MRI (fMRI), which allows them to make “microscopic blood flow movies” and map the activity of human brains in action. 

“We have a behavioral superpower in our brain and it at least in part involves dopamine,” says Montague in this talk. “We can deny any instinct we have for survival for an idea. No other species can do that.” 

So how do we assign value to ideas, process the gestures of those around us, make complicated decisions, and create informed judgments about each other? Montague’s lab hopes to discover much more about how these processes work by “eavesdropping” on the brains of 5,000 to 6,000 participants all over the world as they play negotiation games. It’s fascinating research that could tell us more about our social nature. Because as Montague says, “You often don’t know who you are until you see yourself in interaction with people who are close to you, people who are enemies to you, and people who are agnostic to you.” 

To hear much more about Montague’s work, watch this talk. And after the jump, hear insights from 11 others who are working hard to give a clearer picture of how our brains work. 

Allan Jones: A map of the brain Curious to see what a real human brain looks like? Watch this talk from Allan Jones, the CEO of the Allen Institute for Brain Science, given at TEDGlobal 2011. In it, he describes the Institute’s work to map brain function in the same detailed way that we map cities, investigating how the 86 billion neurons in the brain work together. (Read this great article in Forbes magazine about Paul Allen, the Microsoft cofounder who spent more than $500 million creating the Allen Institute.) 

Gero Miesenboeck reengineers a brain Optogeneticist Gero Miesenboeck has a different approach for understanding the brain — rather than recording the activity of neurons, he works backwards, seeking to control them. In this talk from TEDGlobal 2010, Miesenboeck explains his work manipulating neurons in fruit flies to see what happens when the brain’s code is broken. 

Daniel Wolpert: The real reason for brains Why do we have brains in the first place? Neuroscientist Daniel Wolpert hypothesizes that the human brain didn’t evolve to think or to feel, but to control movement. In this talk from TEDGlobal 2011, Wolpert shows how perception creates graceful, agile human movement. 

Jill Bolte Taylor’s stroke of insight Brain researcher Jill Bolte Taylor got a new view of the miraculous functioning of the brain when she had a massive stroke. In this powerful talk from TED2008, she describes feeling powerless as her brain functions shut down, and talks about her recovery. 

VS Ramachandran: 3 clues to understanding your brain The human brain may be a “three pound mass of jelly,” but it can “contemplate the meaning of infinity.” In this talk given at TED2007, neurologist VS Ramachandran explains his work to understand basic brain function, delving into three delusions that happen when brain activity goes awry. 

Michael Merzenich: Growing evidence of brain plasticity The brain is constantly able to change and adapt. In this talk from TED2004, neuroscientist Michael Merzenich describes the brain’s ability to re-wire itself, and shows why this elasticity is so meaningful. 

Sarah-Jayne Blakemore: The mysterious workings of the adolescent brain Cognitive neuroscientist Sarah-Jayne Blakemore studies the brains of teenagers because, rather than being fully developed, the organ continues to build through a person’s 20s and 30s. In this talk from TEDGlobal 2012, Blakemore shows why teenagers are more impulsive and more prone to feeling embarrassed than their adult counterparts. 

Henry Markram: A brain in a supercomputer There may be 100,000,000,000,000 synapses in the human brain, but their functioning can be understood. In this talk from TEDGlobal 2009, neuroscientist Henry Markram explains how a supercomputer can help model the brain.

Christopher deCharms: A look inside the brain in real time Can you see how you feel? Yes, using fMRI. In this fast-paced talk from TED2008, neuroscientist and inventor Christopher deCharms shows how the brain can be viewed in real time using this amazing technology. 

Charles Limb: Your brain on improv Charles Limb is a surgeon who studies creativity, and is fascinated by how people create music. In this fun talk from TEDxMidAtlantic, Limb shows his work putting jazz musicians and rappers in fMRIs to see what happens when they improvise. (Read the TED Blog’s Q&A with Limb here.) 

Oliver Sacks: What hallucination reveals about our minds When we see with our eyes, we also see with our brains. But sometimes, the two do not match up. In this talk from TED2009, neurologist Oliver Sacks describes Charles Bonnet syndrome, which leads visually impaired people to experience lucid visual hallucinations. From there, he shows what this teaches us about normal brain function.

Tuesday, September 25, 2012

Building Character—Resilience, Optimism, Perseverance, Focus—To Help Poor Students Succeed

Yes, yes, yes. Thomas Toch at The Washington Monthly reviews How Children Succeed: Grit, Curiosity, and the Hidden Power of Character by Paul Tough. We have to deal with the psychological and emotional impact of poverty and violence if we want the kids who struggle most to succeed.

First-Rate Temperaments

Liberals don’t want to admit it, and conservatives don’t want to pay for it, but building character—resilience, optimism, perseverance, focus—may be the best way to help poor students succeed.

By Thomas Toch

How Children Succeed: Grit, Curiosity, and the Hidden Power of Character
by Paul Tough
Houghton Mifflin Harcourt, 256 pp.


When Barack Obama campaigned for the White House four years ago, Democrats and their allies in education policy circles were embroiled in a fierce debate over how best to improve the educational performance of the millions of K-12 students living in poverty.

One camp, a coalition of researchers and educators formed by the Economic Policy Institute, a liberal Washington think tank, argued in a manifesto called A Broader, Bolder Approach to Education that tackling poverty’s causes and consequences was the way to free disadvantaged students from the grip of educational failure. “Schools can ameliorate some of the impact of social and economic disadvantage on achievement,” the coalition wrote. But, it continued, “[t]here is no evidence that school improvement strategies by themselves can substantially, consistently, and sustainably close these gaps.”

In sharp contrast, a second reform group, led by then-school superintendents Joel Klein of New York and Michelle Rhee of Washington, D.C., drafted a competing reform manifesto under the auspices of an organization known as the Education Equality Project that stressed tougher accountability for schools and teachers, governance reforms for failing schools, and the expansion of charter schools. They largely refused to acknowledge that poverty rather than school quality was the root cause of the educational problems of disadvantaged kids, for fear that saying so would merely reinforce a long-standing belief among public educators that students unlucky enough to live in poverty shouldn’t be expected to achieve at high levels — and public educators shouldn’t be expected to get them there. 

While one of the few reformers with feet in both camps, Chicago schools superintendent Arne Duncan, was named U.S. secretary of education, the Klein cabal won the policy fight. The Obama agenda has focused almost exclusively on systemic school reform to address the achievement deficits of disadvantaged students: standards, testing, teacher evaluations, and a continued, if different, focus on accountability. The administration’s one education-related poverty-fighting program, Duncan’s Promise Neighborhoods initiative, is a rounding error in the Department of Education’s budget.

Duncan was right to align himself early on with both Democratic factions. Good schools can, of course, make a difference in student achievement just by being good. And the inadequate nutrition, housing, language development, and early educational experiences that many impoverished students suffer are real barriers to learning.

But in the last several years a new body of neuroscientific and psychological research has made its way to the surface of public discourse that suggests that the most severe consequences of poverty on learning are psychological and behavioral rather than cognitive. The lack of early exposure to vocabulary and other cognitive deficits that school reformers have stressed are likely no more problematic, the research suggests, than the psychological impact of growing up in poverty. Poverty matters, the new work confirms, but we’ve been trying to address it in the wrong way.

Former New York Times Magazine editor Paul Tough brings this new science of adversity to general audiences in How Children Succeed: Grit, Curiosity, and the Hidden Power of Character, an engaging book that casts the school reform debate in a provocative new light. In his first book, about the antipoverty work of the Harlem Children’s Zone, Tough stressed the importance of early cognitive development in bridging the achievement gap between poor and more affluent students. In How Children Succeed, he introduces us to a wide-ranging cast of characters—economists, psychologists, and neuroscientists among them — whose work yields a compelling new picture of the intersection of poverty and education. 

There’s James Heckman, a Nobel Prize-winning economist at the University of Chicago, who found in the late 1990s that students who earned high school diplomas through the General Educational Development program, widely known as the GED, had the same future prospects as high school dropouts, a discovery that led him to conclude that there were qualities beyond courses and grades that made a big difference in students’ success. His inclinations were confirmed when he dug into the findings of the famous Perry Preschool Project. In the early days of the federal War on Poverty in the 1960s, researchers provided three- and four-year-olds from impoverished Ypsilanti, Michigan, with enriched preschooling, and then compared their life trajectories over several decades with those of Ypsilanti peers who had not received any early childhood education.

The cognitive advantages of being in the Perry program faded after a couple of years. Test scores between the two groups evened out, and the program was considered something of a failure. But Heckman and others discovered that years later the Perry preschoolers were living much better lives, including earning more and staying out of trouble with the law. And because under the Perry program teachers systematically reported on a range of students’ behavioral and social skills, Heckman was able to learn that students’ success later in life was predicted not by their IQs but by the noncognitive skills like curiosity and self-control that the Perry program had imparted.

Tough presents striking research from neuroendocrinology and other fields revealing that childhood psychological traumas — from physical and sexual abuse to physical and emotional neglect, divorce, parental incarceration, and addiction, things found more often (though by no means exclusively) in impoverished families — overwhelm developing bodies’ and minds’ ability to manage the stress of events, resulting in “all kinds of serious and long-lasting negative effects, physical, psychological, and neurological.”

There’s a direct link between the volume of such trauma and rates of heart disease, cancer, alcoholism, smoking, drug use, attempted suicide — and schooling problems. As Tough writes, “Children who grow up in stressful environments generally find it harder to concentrate, harder to sit still, harder to rebound from disappointment, and harder to follow directions. And that has a direct effect on their performance in school. When you’re overwhelmed by uncontrollable impulses [caused in part by disrupted brain chemistry] and distracted by negative feelings, it’s hard to learn the alphabet.”

In particular, such stressors compromise the higher order thinking skills that allow students to sort out complex and seemingly contradictory information such as when the letter C is pronounced like K (what psychologists call “executive functioning”), and their ability to keep a lot of information in their heads at once, a skill known as “working memory” that’s crucial to success in school, college, and work.

The good news, Tough reports, is that studies reveal that the destructive stressors of poverty can be countered. Close, nurturing relationships with parents or other caregivers, he writes, have been shown to engender resilience in children that insulates them from many of the worst effects of a harsh early environment. “This message can sound a bit warm and fuzzy,” Tough says, “but it is rooted in [the] cold, hard science” of neurological and behavioral research, though such nurturing is often in short supply in broken, impoverished homes (and even in many intact households and communities).

As important, Tough contends, is research demonstrating that resilience, optimism, perseverance, focus, and the other noncognitive skills that Heckman and others have found to be so important to success in school and beyond are malleable—they can be taught, practiced, learned, and improved, even into adulthood. Tough points to the work of Martin Seligman, a University of Pennsylvania psychologist and author of Learned Optimism, and Stanford psychologist Carol Dweck, whose research has demonstrated that students taught to believe that people can grow intellectually earn higher grades than those who sense that intelligence is fixed. This commitment to the possibility of improvement, Seligman, Dweck, and others contend, invests students with the ability to persevere, rebound from setbacks, and overcome fears.

Psychologist Angela Duckworth, a protégée of Seligman’s, has done a range of studies—on college students with low SAT scores, West Point plebes, and national spelling bee contestants, among others—and has found that a determined response to setbacks, an ability to focus on a task, and other noncognitive character strengths are highly predictive of success, much more so than IQ scores. 

That’s why some of the schools in the highly regarded KIPP charter school network have added the teaching of such skills to their curricula. And they’ve coupled their traditional academic report cards with “character report cards” developed by KIPP cofounder Dave Levin, Duckworth, and others. Concerned about their students’ inability to make it through high school and college even though they’re prepared academically, they grade students on self-control, gratitude, optimism, curiosity, grit, zest, and social intelligence. Other experts add conscientiousness, perseverance, work habits, time management, and an ability to seek out help to the list of key nonacademic ingredients of success in school and beyond. Students from impoverished backgrounds need such skills in larger doses, Tough argues, because they often lack the support systems available to more affluent students. 

To Tough, the logic of the importance of noncognitive qualities to students’ futures is clear: we need to rethink our solutions to the academic plight of impoverished students. The studies of Dweck, Duckworth, and others support conservative claims that individual character should be an important part of policy discussions about poverty. “There is no anti-poverty tool that we can provide for disadvantaged young people that will be more valuable than character strengths,” Tough writes, a claim that won’t be easy for liberals to stomach. 

But, Tough adds, the contributions of character traits to students’ success go a long way toward refuting conservative “cognitive determinists” like Charles Murray, who claim that success is mainly a function of IQ and that education is largely about sorting people and giving the brightest the chance to take full advantage of their potential. 

The research that Tough explores also undercuts claims by Klein, Rhee, and other signers of the Education Equality Project manifesto that we can get impoverished students where they need to be educationally through higher standards, stronger teachers, and other academic reforms alone. 

What we need to add to the reform equation, Tough argues, is a system of supports for children struggling with the effects of the trauma and stress of poverty. He urges the creation of pediatric wellness centers and classes that help impoverished parents build the emotional bonds with their young children that are so important to the development of children’s neurological and psychological defenses against poverty’s ravages. He supports KIPP’s efforts to engender resilience, persistence, and other character strengths in its students, both in school and then beyond through support programs like KIPP Through College. Work by David Yeager of the University of Texas at Austin and others has shown that even modest interventions, like teachers writing encouraging notes on students’ essays, motivate children to persevere academically. 

Above all, Tough makes a compelling case for giving poverty greater prominence in the education policy debate. Republican presidential hopeful Mitt Romney has talked mostly about school choice and states’ rights in education, playing to conservatives and Catholics, as every GOP candidate since Ronald Reagan has done. But the new science of adversity could be the basis of a compelling reform agenda in a second Obama term—one that merges the competing progressive agendas of the last presidential election cycle.



Tim Ingold - The Social Brain Hypothesis

I found these videos (all seven are embedded in this frame) of Tim Ingold lecturing about the social brain hypothesis at PLoS Blogs' Neuroanthropology. The blogger there, Daniel Lende, does an excellent job of arguing against Robin Dunbar’s Social Brain Hypothesis and offering his own explanation of how the human brain really is social: “I’ll attempt to show that the brain is social because life is.”

To read Lende's post, check it out at his blog.


Tim Ingold - The Social Brain

Are You as Moral as Your Baby?

This article from Big Think was posted back in August; it's a pretty good look at the moral life of babies by Sam McNerney. The research indicates that babies are born with a basic moral sense of right and wrong, fair and unfair, but that how they learn to apply these moral frames depends on the cultural and social understanding they construct through experience.

The Moral Worldview of Babies

Sam McNerney on August 22, 2012, 1:38 PM

“[T]he Author of Nature has determin’d us to receive… a Moral Sense, to direct our Actions, and to give us still nobler Pleasures.”

That appeal was made in 1725 by Scottish philosopher Francis Hutcheson, and it captured one side of a debate that tries to answer the question: Where does morality come from? On the other side were thinkers like John Locke and Thomas Hobbes who believed that morality is the product of experience. That was the extent of the discourse for most of history; morality was either prepackaged or learned. End of story.

Recent psychological research tells us the answer is somewhere in the middle. Yes, babies come into the world predisposed with a set of moral intuitions – morality can’t be entirely self-constructed. But what babies consider right and wrong, and which moral intuitions they value, develop from experience. As social psychologist Jonathan Haidt puts it: “We’re born to be righteous, but we have to learn what, exactly, people like us should be righteous about.”

Let’s look at some research. Consider a paper published earlier this year by Stephanie Sloane, Renée Baillargeon, and David Premack. In one experiment, forty-eight 19-month-olds watched two giraffe puppets dance. The experimenter gave either one toy to each giraffe or two toys to one giraffe. Meanwhile, Sloane and her colleagues timed how long the infants gazed at the scene until they lost interest — longer looking times indicate that the infants sensed something was wrong. They found that three-quarters of the infants looked longer when one giraffe got both toys, suggesting they detected an unfair distribution.

In the second experiment, two women were playing with a small pile of toys when an experimenter said, “Wow! Look at all these toys. It’s time to clean them up!” In one scenario both women put the toys away and both got a reward. In another, one woman put all the toys away and both got a reward. As in the first experiment, the researchers found that the youngsters (21-month-olds in this experiment) gazed longer in the second scenario, in which the worker and the slacker received an equal reward. Here’s Sloane on the implications of her research: 
“We think children are born with a skeleton of general expectations about fairness, and these principles and concepts get shaped in different ways depending on the culture and the environment they’re brought up in… helping children behave more morally may not be as hard as it would be if they didn’t have that skeleton of expectations.”

A study published last October by Marco Schmidt and Jessica Sommerville demonstrates similar results. In one experiment, Schmidt and Sommerville presented 15-month-old babies with two videos: one in which an experimenter distributes an equal share of crackers to two recipients and another in which the experimenter distributes an unequal share of crackers (they also ran the same procedure with milk). The scientists measured how long the babies gazed at the crackers and milk while they were distributed and found that the babies spent more time looking when one recipient got more food than the other. This prompted Schmidt and Sommerville to conclude that
the infants [expecting] an equal and fair distribution of food… were surprised to see one person given more crackers or milk than the other… this provides the first evidence that by at least 15 months of age, human infants possess the rudiments of a sense of fairness in that they expect resources to be allocated equally when observing others.

One of the most cited papers on moral development in the last few years comes from Kiley Hamlin, Karen Wynn and Paul Bloom. In one experiment they used a three-dimensional display and puppets to act out helping/hindering situations for six- and ten-month-old infants. For example, a yellow triangle (helper) helped a red circle (climber) up a hill or a blue square (hinderer) pushed the red circle down the hill. After repeating these two scenarios several times an experimenter offered the helper and hinderer to the infants. They found the infants preferred the helper puppet most of the time. When Hamlin et al. pitted the hinderer against a neutral character the infants likewise preferred the neutral character. This experiment suggests the infants prefer those who help others and avoid those who hinder others.

Drawing on these results (and two similar experiments from the same study), as well as data from other child development research, Bloom concludes in a New York Times article that
babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness… if we didn’t start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest.

This brings me to a brand new study that challenges Hamlin, Wynn and Bloom. The researchers, headed by Dr. Damian Scarf of the University of Otago in New Zealand, note that the scene Hamlin et al. created contains two “conspicuous perceptual events.” The first is a collision between the climber and the helper or the hinderer. The second is a positive bouncing event that occurs when the climber reaches the top of the hill. Scarf and his team hypothesize that the infants are reacting to these events – the aversive collisions and cheerful bouncing – and not deciding from an innate moral sense. In their words, “The helper is viewed as positive because, although associated with the aversive collision event, it is also associated with the more salient and positive bouncing event. In contrast, the hinderer is viewed as negative because it is only associated with the aversive collision event.” 

To test this Scarf’s team created two experiments. The first determined whether infants found the collision event aversive. To do this “[they] eliminated the climber bouncing at the top of the hill on help trials and pitted the helper against a neutral character.” The purpose of this twist was to test if the infants’ decisions derived from a moral sense or the attention-grabbing bouncing. “If infants find the collision between the climber and the helper aversive then in the absence of the climber bouncing, infants should select the neutral character.”

They designed the second experiment to determine if infants found the bouncing event positive. To test this they “manipulated whether the climber bounced on help trials (bounce-at-the-top condition), hinder trials (bounce-at-the-bottom condition), or both (bounce-at-both condition).” If the infants base their decisions on the bouncing event, they should select whichever puppet bounces, regardless of its role as helper or hinderer. However, if Hamlin is right and the infants are driven by a moral intuition, then they “should display universal preference for the helper because in all three conditions the helper is assisting the climber in achieving its goal of ascending the hill.” 

They found evidence in both experiments that the infants were reacting to the two “conspicuous perceptual events” and not driven by potential innate moral intuitions. Here are the scientists:
Experiment 1 demonstrated that, in the absence of bouncing, infants preferred the neutral character over the helper. This finding is consistent with our view that infants find the collision event aversive irrespective of whether the collision occurs between the hinderer and the climber or the helper and the climber. The finding is not consistent with [Hamlin’s] hypothesis because that hypothesis predicts that infants will view the collision between the hinderer and the climber as qualitatively different from the collision between the helper and the climber (i.e., as helping and hindering respectively). Experiment 2 adds further support to the simple association hypothesis by demonstrating that the bouncing event predicts infants’ choices. While the preference for the helper in the bounce-at-the-top condition is consistent with the social evaluation and the simple association hypotheses, the preference for the hinderer in the bounce-at-the-bottom condition and the lack of a preference in the bounce-at-both condition clearly conflicts with the social evaluation hypothesis. If infants’ choices were based on social evaluation then, because the helper assists the climber in both the bounce-at-the-bottom and bounce-at-both conditions, infants should display preference for the helper in both conditions.

Do these results undermine Hamlin et al.’s previous study? It’s not likely. In a response published in the academic journal PNAS Hamlin outlines four shortcomings in Scarf et al.’s experiment: 1) The climber looked different; 2) the climber acted differently; 3) the climber appeared to climb the hill on its own during helping trials; 4) the climber moved downwards before the hinderer made contact. Hamlin concludes that, “All of these considerations make it plausible, then, that Scarf et al.’s infants responded to perceptual variables because—unlike in our original study—the goal of the Climber was unclear to the infants and therefore the “helping” and “hindering” events did not strike them as helping or hindering.” 

Also important is the fact that Hamlin and her colleagues have replicated their findings several times “across several social scenarios that do not involve climbing, colliding, or bouncing.” In addition, numerous studies published by other researchers in the last several years – including the aforementioned studies – provide good evidence that a general sense of fairness and the capacity to judge the actions of others are hard-wired. Scarf and his team are right to call attention to potential sources of error, but the evidence in favor of Hutcheson’s assertion – that the Author of Nature determined us to receive a moral sense – appears robust. 
  • Portions of this post were taken from an old post from my previous blog
  • Image via Shutterstock
  • I got the Hutcheson quote here

Monday, September 24, 2012

Fat Storage Is How the Body Protects Itself from Poor Nutrition


We eat too much food (too many calories) and we get fat. Being fat brings a host of other health issues, such as heart and other cardiovascular diseases, high cholesterol, diabetes, and on the list could go. Pretty simple, right? Wrong.

Storing excess calories (energy) as fat is how the body tries to protect us from poor nutritional choices and excess calories, in order to prevent high cholesterol, heart disease, diabetes, and so on.

Let's break it down

We (human beings in Western nations) eat too much, most of which is not healthy, i.e., filled with simple carbohydrates (sugars, white flour, etc.), saturated and trans fats, and most of it is processed, far from what nature intended.

For example, we go to Burger King and get a BK Quad Stacker (930 calories, 28 grams of saturated fat), a large fries, and a chocolate milkshake. That's lunch or dinner. That meal is easily over 1500 calories, and unless you plan to run a marathon in a couple of hours, that is about 1100 calories more than you need, not to mention all of the saturated fat, simple carbohydrates, sugar, and salt.

The body does its best to do its job. The stomach and small intestine digest the food and send the "nutrients" into the bloodstream to be used for energy. The pancreas gets the message that there is a serious load of energy in the blood, so it produces more insulin to handle the increased calorie load. Meanwhile, the liver is doing its job, converting the extra glucose (sugar) and fats (mostly unhealthy fats in this case) into triglycerides. Some of the saturated fat is also being converted into LDL cholesterol.

At this point, our bloodstream is filled with glucose, triglycerides, lipids (fats), and cholesterol, and now some insulin is shooting onto the scene. Its job is to kick ass and take names.

The insulin stores glucose in the muscles and liver until they are full (and unless you just worked out, they are probably already full), then it stuffs triglycerides into fat cells, as well as sending extra glucose and fat back to the liver to be converted into triglycerides and stored as fat.

The fact that the body does this is essential. Too much glucose in the blood, as we know from diabetics, can cause blindness, neuropathy, and other serious health issues. Too much fat in the blood clogs the arteries and we have a heart attack or a stroke.

If the body did not store all of this extra "energy" as fat, we would die young, but thinner.

What this explains, in part, is why overweight people can have normal cholesterol, triglyceride, and glucose levels. On the other hand, take these measurements following a meal at their favorite fast food joint and their scores will be off the charts - the heavier we get, the less well our bodies handle unhealthy foods, until we get to a point where the pancreas cannot generate enough insulin anymore.

I don't want to create the wrong idea here - being fat is not healthy. Fat cells become less sensitive to insulin the fuller they are, until the body has to make more of them. In addition, fat cells produce estrogen, and the majority of major cancers are estrogen-related (including breast and prostate). Finally, when fat cells are unresponsive, the body will store fat in muscle cells, and this has been linked to diabetes.

Over at the PLoS ONE blog, Obesity Panacea, Peter Janiszewski, Ph.D. reports on the research that supports this version of how the body works. It's very cool, and has some good links.

Not enough, rather than too much fat, causes metabolic problems of obesity

Secular Buddhist Podcast #135: Charles Prebish, Sarah Haynes, Justin Whitaker, Danny Fisher - Two Buddhisms Today

This is a very cool episode of the Secular Buddhist Podcast, a round-table discussion with Charles Prebish, Sarah Haynes, Justin Whitaker, and Rev. Danny Fisher on the current changes in the American Buddhist world.

In this discussion, they look at the increasing divide between traditional Buddhist practice in the U.S. and the widening circles of secular Buddhist practice.

I'm happy to add, on a personal note, that I have been reading Justin Whitaker and Danny Fisher for many years now, their blogs being among the elite Buddhist blogs on the internets.

Episode 135 :: Charles Prebish, Sarah Haynes, Justin Whitaker, Danny Fisher :: Two Buddhisms Today


Today we have a round table discussion with Charles Prebish, Sarah Haynes, Justin Whitaker, and Danny Fisher on the changes in the American Buddhist landscape.

Our cultural landscape is changing, and it seems the rate of change is more rapid than ever. We’ve seen tremendous progress in civil rights, diversity issues, and of particular interest to Buddhists, our communities of practice. There is now a much wider representation in America of traditional Buddhism, and increasingly secular groups. Whatever you find most helpful to you in your practice, it’s likely out there somewhere, or on the way. But, that wasn’t always the case. Buddhism has grown through the pioneering efforts of those from particular traditional backgrounds, and their sanghas reflected that.

Today, we’re going to have a round table discussion that’s a response. Not to the cultural landscape’s change, but to criticisms about past efforts to understand that landscape at the time. Understanding that this is a controversial topic, we’ve invited the participation of four Buddhist scholars to discuss it, and provide their insight and point of view.

 

Charles Prebish

Charles Prebish is among the most prominent scholars in studying the forms that Buddhist tradition has taken in the United States. Dr. Prebish has been an officer in the International Association of Buddhist Studies, and was co-founder of the Buddhism Section of the American Academy of Religion. In 1994, he co-founded the online Journal of Buddhist Ethics, which was the first online peer-reviewed journal in the field of Buddhist Studies. Prebish has also served as editor of the Journal of Global Buddhism and Critical Review of Books in Religion. In 1996, he co-founded the Routledge “Critical Studies in Buddhism” series, and currently co-edits the Routledge “World Religions” series of textbooks. He is also co-editor of the Routledge Encyclopedia of Buddhism project.

Sarah Haynes

Sarah Haynes is assistant professor in the Department of Philosophy & Religious Studies at Western Illinois University. Her primary area of research is Tibetan Buddhism, specifically Tibetan Buddhist ritual and its manifestations in North America. She has also conducted research on Jodo Shinshu communities in North America and their relationship to Mormon communities in Utah and Alberta. Her publications include: A Relationship of Reciprocity: Globalization, Skilful Means, and Tibetan Buddhism in Canada, in Wild Geese: Studies of Buddhism in Canada; An Exploration of Jack Kerouac’s Buddhism: Text and Life, in the Journal of Contemporary Buddhism; and the forthcoming collection of essays “Wading into the Stream of Wisdom: Essays in Honor of Leslie Kawamura”.


Justin Whitaker

Justin Whitaker is a student of Damien Keown and a PhD candidate at Goldsmiths, University of London. There he is working on a thesis comparing early Buddhist ethics and the work of the German philosopher Immanuel Kant. Mr Whitaker holds a BA (with Honours) in Philosophy from The University of Montana and an MA (with Distinction) in Buddhist Studies from Bristol University. He has extensive experience teaching Buddhist Studies and Philosophy as an Instructor and Teaching Assistant at The University of Montana as well as Antioch University’s Education Abroad programme based in Bodhgaya, India, and currently works as a Distance Education Instructor in Comparative World Religions for Mohave Community College, Arizona. He has presented papers at several academic conferences including “Meditation’s Ethics: Ignatian Spiritual Exercises and Buddhist Metta-Bhavana” at the American Academy of Religion’s 2009 international conference in Montreal as well as “Wriggling Eels in the Wilderness of Views: Studies in Buddhist Ethics” for the Oxford Centre for Buddhist Studies, and “Warnings from the Past, Hope for the Future: The Ethical-Philosophical Unity of Buddhist Traditions” at the International Association of Buddhist Universities UN Day of Vesak, both in 2012.


Danny Fisher

Reverend Danny Fisher is the author of the Patheos blog Off the Cushion, maintains an official website, and writes for Shambhala Sun, Buddhadharma: The Practitioner’s Quarterly, and elephantjournal.com. Rev. Fisher’s commentary on Buddhism in the United States has been featured on CNN, the Religion News Service, E! Entertainment Television, and others. Rev. Fisher earned his Master of Divinity from Naropa University and his Doctorate in Buddhist Studies from University of the West. He is also a professor and Coordinator of the Buddhist Chaplaincy Program at University of the West. He was ordained as a lay Buddhist minister by the Buddhist Sangha Council of Southern California in 2008 and is certified as a mindfulness meditation instructor by Naropa University in association with Shambhala International. He also serves on the advisory council for the Upaya Buddhist Chaplaincy Program, and in 2009 became the first-ever Buddhist member of the National Association of College and University Chaplains.

So, sit back, relax, and have a nice white grape juice.
 
Documentary - Mustang: A Kingdom on the Edge


This 2011 documentary about the tiny Buddhist kingdom of Mustang is very cool and also sad - the Chinese have gutted Tibet and have been encroaching into Nepal for a long while now, with Maoist rebels forcing their way into the government and Nepalese culture.

The documentary is a little more than 47 minutes long and was created by Al Jazeera.


Mustang: A Kingdom on the Edge

While Tibetan Buddhism is squeezed inside of China’s borders, there is a place where it still survives intact: Upper Mustang – a once forbidden kingdom high in the Nepalese Himalayas.

Steve Chao travels there to document the fight to preserve an ancient culture, as China expands its influence into Nepal, and the modern world slowly creeps in.

There is a reason for China’s concern. In the 1960s, shortly after the Dalai Lama fled Tibet for India, a Tibetan resistance movement was formed in a place called Mustang.

Mustang, or Lo, as locals call it, is an ancient Tibetan kingdom that is now part of Nepal. Hidden in the Himalayas, the world’s highest mountain range, it is protected by its remoteness, and the fact the only way in and out for centuries was on horseback.

Steven Poole - Your brain on pseudoscience: The rise of popular neurobollocks

A couple of weeks ago, Steven Poole published (in the New Statesman) an entertaining argument against all the neuroscience books that promise simple explanations for the complexity of human experience - he calls them "self-help books dressed up in a lab coat."

Your brain on pseudoscience: The rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?



This is a metaphor. Photograph: Getty Images

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

Sunday, September 23, 2012

Skeptiko Podcast 184: Dr. Rupert Sheldrake Sets Science Free From Dogma

Dr. Rupert Sheldrake was the guest recently on the Skeptiko podcast with Alex Tsakiris - there to discuss his newest book, Science Set Free: 10 Paths to New Discovery.

184: Dr. Rupert Sheldrake Sets Science Free From Dogma

September 5th, 2012 Alex Tsakiris

Interview examines how scientific assumptions about materialism and consciousness have constrained us.

 
Join Skeptiko host Alex Tsakiris for an interview with biologist and author Dr. Rupert Sheldrake about his new book, Science Set Free: 10 Paths to New Discovery.  During the interview Sheldrake explains his post-materialist worldview:

Alex Tsakiris: I think that’s part of the problem. I think all these questions of the spiritual are not buried deep in these scientific questions you pose — they’re right there under the paper-thin surface of them.  Take survival of consciousness, if we just look at the data and we say, “That seems to suggest that consciousness survives death,” well, for any man on the street, as well as any scientist, that proposition immediately launches us into deep questions of the spiritual. I don’t know how you can get around that.

Dr. Rupert Sheldrake: I think it’s quite important to decouple these.  Although the science is very relevant to these issues it doesn’t map in such a way that to be an Atheist you’ve got to be a Dawkins-style materialist or to be a religious person you’ve got to be a dualist.

I think what we’re heading for is a post-materialist worldview which is what my book is trying to point the way towards. We could have a holistic way of looking at things, a scientific investigation into things, which leaves these bigger questions open. For example, in one chapter of the book where I’m dealing with the dogma that memories are stored as material traces inside the brain that becomes the question, are memories stored as material traces in the brain?

I’m not confident memories are stored in brains. I think that brains are more like tuning devices, more like TV receivers than like video recorders. Now that’s really a scientific question, how is memory stored? We can do experiments to try and find out how memory works.

So for materialists it’s a simple two-step argument. Memories are stored in brains; the brain decays at death, therefore, memories are wiped out at death. Whereas, if memories are not stored in brains then the memories themselves are not wiped out at death. They’re potentially accessible. That doesn’t prove they are accessed, that there is personal survival. It just means that’s a possibility whereas with materialism it’s an impossibility. So one position leaves the question closed and the other leaves it open.

Rupert Sheldrake’s Website

Download MP3 (38 min.)
Read It:

Alex Tsakiris: Today we welcome back to Skeptiko biologist and author, Dr. Rupert Sheldrake. He’s here to talk about his latest book, The Science Delusion. If you’re here in the U.S. you’ll find it at Amazon under the title, Science Set Free.

Rupert, welcome back and thanks for joining me.

Dr. Rupert Sheldrake:  It’s very good to be with you again.

Read it if you so desire.

How Do Scientists Measure Dream Content?

Lucid: How do neuroscientists measure dream content?

This article appeared at Science Codex back in August - a look at how scientists try to understand and measure the content of a person's dreams by analyzing brain activity. Researchers at the Max Planck Institute of Psychiatry in Munich, the Charité hospital in Berlin and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig used lucid dreamers as subjects, with the benefit of being able to check recorded content against experienced content.

Lucid: How do neuroscientists measure dream content?

Posted On: August 18, 2012

The ability to dream is a fascinating aspect of the human mind. However, how the images and emotions that we experience so intensively when we dream form in our heads remains a mystery.
It has not been possible to measure dream content, but now Max Planck scientists working with colleagues from the Charité hospital in Berlin have succeeded in analyzing the activity of the brain during dreaming.

They were able to do this with the help of lucid dreamers: people who become aware of their dreaming state and are able to alter the content of their dreams, a phenomenon first described in 1913 by the Dutch psychiatrist Frederik van Eeden. The scientists found that brain activity during a dreamed movement matched that observed during a real movement executed in a state of wakefulness.

Methods like functional magnetic resonance imaging (fMRI) have enabled scientists to visualize and identify the precise spatial location of brain activity during sleep. However, up to now, researchers have not been able to analyse specific brain activity associated with dream content, as measured brain activity can only be traced back to a specific dream if the precise temporal coincidence of the dream content and measurement is known. Whether a person is dreaming is something that could only be reported by the individual himself.

Scientists from the Max Planck Institute of Psychiatry in Munich, the Charité hospital in Berlin and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig made use of the ability of lucid dreamers to dream consciously for their research. Lucid dreamers were asked to become aware of their dream while sleeping in a magnetic resonance scanner and to report this "lucid" state to the researchers by means of eye movements. They were then asked to voluntarily "dream" that they were repeatedly clenching first their right fist and then their left one for ten seconds.

This enabled the scientists to measure the entry into REM sleep – a phase in which dreams are perceived particularly intensively – with the help of the subject's electroencephalogram (EEG) and to detect the beginning of a lucid phase. The brain activity measured from this time onwards corresponded with the arranged "dream" involving the fist clenching. A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. This is directly comparable with the brain activity that arises when the hand is moved while the person is awake. Even if the lucid dreamer just imagines the hand movement while awake, the sensorimotor cortex reacts in a similar way.

This is a patient in a functional magnetic resonance imaging machine.
(Photo Credit: MPI of Psychiatry)

The coincidence of the brain activity measured during dreaming and the conscious action shows that dream content can be measured. "With this combination of sleep EEGs, imaging methods and lucid dreamers, we can measure not only simple movements during sleep but also the activity patterns in the brain during visual dream perceptions," says Martin Dresler, a researcher at the Max Planck Institute for Psychiatry.

The researchers were able to confirm the data obtained using MR imaging in another subject using a different technology. With the help of near-infrared spectroscopy, they also observed increased activity in a region of the brain that plays an important role in the planning of movements. "Our dreams are therefore not a 'sleep cinema' in which we merely observe an event passively, but involve activity in the regions of the brain that are relevant to the dream content," explains Michael Czisch, research group leader at the Max Planck Institute for Psychiatry.

This shows activity in the motor cortex during the movement of the hands while awake (left) and during a dreamed movement (right). Blue areas indicate the activity during a movement of the right hand, which is clearly demonstrated in the left brain hemisphere, while red regions indicate the corresponding left-hand movements in the opposite brain hemisphere. (Photo Credit: MPI of Psychiatry.)

Omnivore Links - Philosophy, Religion, and Science

A veritable bonanza of links from Bookforum's Omnivore blog - on the value of philosophy, the centrality of religion, and how science advances. Enjoy!


 * * * * * * *