Wednesday, December 26, 2012

Top Brain, Psychology, and Neuroscience Stories from 2012

These stories come from three sources - Forbes, BPS Psychology, and PsyBlog. Many of these stories appeared here or in my Facebook feed, and several more are pieces I had missed.

The first collection comes from Forbes and was compiled by science and technology contributor David DiSalvo. It’s a good list; to me, the best story of these 10 is that Dr. Robert L. Spitzer finally recanted his research on reparative therapy and asked the Archives of Sexual Behavior, where his 2001 APA presentation was published in 2003, to retract it. A lot of damage has already been done because of the paper, but at least he asked the journal to retract it.

The next collections will be posted tomorrow and Friday.

Forbes: The Top 10 Brain Science and Psychology Stories of 2012




David DiSalvo, Contributor

I write about science, technology, and the cultural ripples of both.

12/23/2012

 
This is the fourth year that I’ve chronicled some of the best brain science and psychology stories from the past 12 months, and every year it gets a little harder because the amount of research published each month just keeps growing. So this year, I’m narrowing the list by choosing the top 10 stories covered on Neuropsyched in 2012. I’ve focused on pieces covering research and research-spawned developments (in other words, there aren’t any top 10 lists in this top 10 list).

1. Humans Aren’t the Only Apes that Have a Midlife Crisis

Withdrawal, frustration, sadness — all are considered hallmarks of the human midlife crisis. Until now, the collection of factors cited as bringing on the angst has included societal and economic pressures that exert psychological forces strong enough to bend our lives into the famous U-shaped curve of happiness.

But research published in the Proceedings of the National Academy of Sciences could drastically alter those assumptions by bringing another no-less-ominous factor to the table: biology. It seems our cousins the great apes also experience midlife crises, and they don’t need the allure of a new Lexus or hair transplants to get them there.

First, let’s put the midlife crisis in perspective with what has been the accepted definition for many years. By midlife, the challenges of achievement have wound down and been replaced by responsibilities like raising a family, paying for a house, and trying to climb the next tenuous rung on the corporate ladder. Taken together, these pressures drop us to the bottom of the U in the U-shaped curve of happiness.

But then, after what could be 10-15 years or more, the pressures start letting up. The burdens become less arduous, and we start our ascent from the bottom of the U into happier days when we experience more freedom to pursue what makes us fulfilled.

If social and psychological explanations alone were adequate to explain human behavior, the description I just gave could stand forevermore. Good thing there are a few other primates on the planet to set us straight.

Researchers Andrew Oswald, a behavioral economist from the University of Warwick, and Alexander Weiss, a primatologist from the University of Edinburgh, assembled an international team of experts to conduct a “well-being census” of 336 chimpanzees and 172 orangutans of all ages living in research facilities and zoos spanning five countries.

The team worked with the primates’ keepers and asked them to complete questionnaires designed to assess the mood, pleasure-seeking drive, and other personality traits of the apes across each of their life spans. The keepers were also asked to assess how effective each ape was in achieving the human equivalent of goals — becoming dominant, winning a mate, or learning to use a tool or toy in a novel way.

The results of these and other questions were analyzed and composite well-being scores were plotted along the apes’ life spans. As it turns out, they also have a distinctive U-shaped curve, and it looks a lot like ours. Around the age of 30, the approximate midpoint of an average chimp’s or orangutan’s life span, the apes experienced less energy, lowered moods, less willingness to engage with the group, and less gumption to achieve anything new.
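
To see how that kind of curve is typically detected, here is a minimal sketch in Python (my own toy example with simulated scores, not the researchers’ data or code): regress composite well-being on age and age squared, and if the fitted parabola opens upward, its low point marks the bottom of the U.

```python
import numpy as np

# Simulated, purely hypothetical data: well-being scores that dip around age 30
rng = np.random.default_rng(0)
age = rng.uniform(5, 55, 500)                                       # hypothetical ape ages
wellbeing = 0.01 * (age - 30) ** 2 + rng.normal(0, 0.5, age.size)   # toy composite scores

# Fit wellbeing ~ b0 + b1*age + b2*age^2 by ordinary least squares
X = np.column_stack([np.ones_like(age), age, age ** 2])
b0, b1, b2 = np.linalg.lstsq(X, wellbeing, rcond=None)[0]

if b2 > 0:  # an upward-opening parabola is the statistical signature of a U-shape
    print(f"U-shaped curve; estimated low point near age {-b1 / (2 * b2):.1f}")
```

With these made-up numbers the estimated low point lands near age 30, mirroring the pattern the team reported for chimps and orangutans.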

What this finding tells us is that we have every reason to believe that biology exerts a strong influence in the human midlife crisis as well, although it’s not exactly clear why. Neurobiology, among other disciplines, now has the stage to carry on new research that could eventually unlock the reasons.

“This opens a whole new box in the effort to explain the midlife dip in well-being,” Andrew Oswald told the L.A. Times. “It makes one’s head spin.”


2. Receiving a Compliment has Same Effect as Receiving Cash

Compliments may not pay the rent, but according to new research, they help improve performance in a similar way to receiving a cash reward. 


Researchers recruited 48 adults for the study who were asked to learn and perform a specific finger pattern (pushing keys on a keyboard in a particular sequence as fast as possible in 30 seconds). Once participants had learned the finger exercise, they were separated into three groups.

One group included an evaluator who would compliment participants individually; another group involved individuals who would watch another participant receive a compliment; and the third group involved individuals who evaluated their own performance on a graph.

When the participants were asked to repeat the finger exercise the next day, the group of participants who received direct compliments from an evaluator performed significantly better than participants from the other groups. The result indicates that receiving a compliment after exercising stimulated the individuals to perform better even a full day afterward.

According to Norihiro Sadato, the study lead and a professor at the National Institute for Physiological Sciences in Japan, “To the brain, receiving a compliment is as much a social reward as being rewarded money. We’ve been able to find scientific proof that a person performs better when they receive a social reward after completing an exercise. Complimenting someone could become an easy and effective strategy to use in the classroom and during rehabilitation.”

The researchers had previously discovered that the same area of the brain affected in this study, the striatum, is activated when a person is rewarded with a compliment or cash.

Why might this happen? Odd as it may sound, the answer is probably closely related to the function of sleep. Researchers theorize that complimenting someone’s efforts acts as a catalyst for better “skill consolidation” during sleep. To account for the sleep variable, researchers in this study kept close tabs on the duration and quality of sleep of the participants. From this and previous studies, it seems as though praise provides the right memory boost for the brain to more efficiently consolidate learning while we’re snoozing. Receiving a cash incentive appears to trigger the same effect.

The practical takeaway: if you’re in a position of authority (manager, teacher, etc.), be sure to use compliments (and/or spot bonuses) as a means to encourage learning new skills. You may find that your underlings come back the next day with surprising improvements.

The study was published in the open-access journal PLOS ONE.


3. Could We One Day Switch Off Bad Habits in Our Brains?

Imagine one day being able to consult with a doctor about “switching off” your smoking habit with a day of outpatient surgery.

That’s the possibility raised by a new study conducted by MIT neuroscientists aimed at finding the master switch in the brain that controls habits. Researchers found that a small region of the brain’s prefrontal cortex, where most thought and planning occurs, is responsible for moment-by-moment control of which habits are switched on at a given time.

“We’ve always thought — and I still do — that the value of a habit is you don’t have to think about it. It frees up your brain to do other things,” says Institute Professor Ann Graybiel, a member of the McGovern Institute for Brain Research at MIT. “However, it doesn’t free up all of it. There’s some piece of your cortex that’s still devoted to that control.”

As we’re all well aware, old habits die hard, and that’s because they are deeply wired into our brains. That’s great in some cases because it allows our brain to expend energy on other things while a habitual behavior, such as driving to work, occurs with very little thought required. But in other cases habits can wreak havoc with our lives, as with obsessive-compulsive disorder. And sometimes what was once a beneficial habit continues even though it no longer benefits us.

The MIT team simulated that scenario with rats trained to run a T-shaped maze. As the rats approached the decision point, they heard a tone indicating whether they should turn left or right. When they chose correctly, they received a reward — chocolate milk (for turning left) or sugar water (for turning right).

To show that the behavior was habitual, the researchers eventually stopped giving the trained rats any rewards, and found that they continued running the maze flawlessly. The researchers then offered the rats chocolate milk in their cages, but mixed it with lithium chloride, which causes mild nausea. The rats still continued to run left when cued to do so, but they stopped drinking the chocolate milk.

Once they had shown that the habit was fully ingrained, the researchers wanted to see if they could break it by interfering with a part of the prefrontal cortex known as the infralimbic (IL) cortex. Although the neural pathways that encode habitual behavior appear to be located in deep brain structures known as the basal ganglia, it has been shown that the IL cortex is also necessary for such behaviors to develop.

Using optogenetics, a technique that allows researchers to inhibit specific cells with light, the researchers turned off IL cortex activity for several seconds as the rats approached the point in the maze where they had to decide which way to turn.

Immediately, the rats stopped running to the left (the side with the tainted chocolate milk). This suggests that turning off the IL cortex switches the rats’ brains from an “automatic, reflexive mode to a mode that’s more cognitive or engaged in the goal — processing what exactly it is that they’re running for,” according to Kyle Smith, lead author of the paper.

Once broken of the habit of running left, the rats soon formed a new habit, running to the right side every time, even when cued to run left. The researchers showed that they could break this new habit by once again inhibiting the IL cortex with light. To their surprise, they found that these rats immediately regained their original habit of running left when cued to do so.

“This habit was never really forgotten,” Smith says. “It’s lurking there somewhere, and we’ve unmasked it by turning off the new one that had been overwritten.”

So what does this mean for us? First, it appears that old habits can be broken but they aren’t forgotten. They’re replaced by new habits, but the old habit is still lurking. And it seems that the IL cortex (the “master switch”) favors new habits over old ones.

We are, of course, a long way from testing this technique in humans. But eventually, according to Graybiel, it’s possible the technology will evolve to the point where it might be a feasible option for treating disorders involving overly repetitive or addictive behavior.

The study was published in the Proceedings of the National Academy of Sciences.


4. How One Flawed Study Spawned a Decade of Lies

In 2001, Dr. Robert L. Spitzer, a psychiatrist and professor emeritus at Columbia University, presented a paper at a meeting of the American Psychiatric Association about something called “reparative therapy” for gay men and women. By undergoing reparative therapy, the paper claimed, gay men and women could change their sexual orientation. Spitzer had interviewed 200 allegedly formerly homosexual men and women who, he claimed, had shown varying degrees of such change; all of the participants provided Spitzer with self reports of their experience with the therapy.

Spitzer, now 79 years old, was no stranger to the controversy surrounding his chosen subject. Thirty years earlier, he had played a leading role in removing homosexuality from the list of mental disorders in the association’s diagnostic manual. Clearly, his interest in the topic was more than a passing academic curiosity – indeed, it wouldn’t be a stretch to say he seemed invested in demonstrating that homosexuality was changeable, not unlike quitting smoking or giving up ice cream.

Fast forward to 2012, and Spitzer is of quite a different mind. In April he told a reporter with The American Prospect that he regretted the 2001 study and the effect it had on the gay community, and that he owed the community an apology. And in May he sent a letter to the Archives of Sexual Behavior, which published his work in 2003, asking that the journal retract his paper.

Spitzer’s mission to clean the slate is commendable, but the effects of his work have been coursing through the homosexual community like acid since the study made headlines a decade ago. His study was seized upon by anti-homosexual activists and therapists who held up Spitzer’s paper as proof that they could “cure” patients of their sexual orientation.

Spitzer didn’t invent reparative therapy, and he isn’t the only researcher to have conducted studies claiming that it works, but as an influential psychiatrist from a prestigious university, his words carried a lot of weight.

In his recantation of the study, he says that it contained at least two fatal flaws: the self reports from those he surveyed were not verifiable, and he didn’t include a control group of men and women who didn’t undergo the therapy for comparison. Self reports are notoriously unreliable, and though they are used in hundreds of studies every year, they are generally regarded as thin evidence at best. Lacking a control group is a fundamental no-no in social science research across the board. The conclusion is inescapable — Spitzer’s study was simply bad science.

What’s remarkable is that this classic example of bad science was approved for presentation at a conference of the leading psychiatric association, and was subsequently published in a peer-reviewed journal of the profession. Spitzer now looks back with regret and critically dismantles his work, but the truth is that his study wasn’t credible from the beginning. It only assumed a veneer of credibility because it was stamped with the imprimatur of his profession.

Why this occurred is a bit more complicated than a mere case of professional cronyism. For many years before his paper on reparative therapy, Spitzer had conducted studies that evaluated the efficacy of self-reporting as a tool to assess a variety of personality disorders and depression. He was a noted expert on the development of diagnostic questionnaires and other assessment measures, and his work was influential in determining whether an assessment method was valuable or should be discarded.

Little wonder, then, that his paper on reparative therapy–which used an interview method that Spitzer recognized as reliable–was accepted by the profession. This wasn’t just anyone claiming that the self reports were valid, it was one of the most highly regarded diagnostic assessment experts in the world.

Reading the study now, I’m sure Spitzer is embarrassed by its flaws. Not only did he rely on self reports, but he conducted the participant interviews by phone, which escalates unreliability to the doesn’t-pass-the-laugh-test level. By phone, researchers aren’t able to evaluate essential non-verbal cues that might cast doubts on verbal responses. Phone interviews, along with written interviews, carry too much guesswork baggage to be valuable in a scientific study, and Spitzer certainly knew that.

The object lesson worth drawing from this story is that just one instance of bad science given the blessing of recognized experts can lead to years of damaging lies that snowball out of control. Spitzer cannot be held solely responsible for what happened after his paper was published, but he’d probably agree now that the study should never have been presented in the first place. At the very least, his example may help prevent future episodes of the same.


5. What Makes Presidents and Psychopaths Similar?

On October 14, 1912, just before giving a scheduled speech in Milwaukee, Theodore Roosevelt was shot in the chest by would-be assassin John Schrank. 
 
Roosevelt not only survived the attempt on his life, but went on to deliver his speech as scheduled. He began by saying,

“I don’t know whether you fully understand that I have just been shot; but it takes more than that to kill a Bull Moose. But fortunately I had my manuscript, so you see I was going to make a long speech, and there is a bullet – there is where the bullet went through – and it probably saved me from it going into my heart. The bullet is in me now, so that I cannot make a very long speech, but I will try my best.”

What explains Roosevelt’s dauntlessness? New research published in the Journal of Personality and Social Psychology suggests that presidents and psychopaths share a psychological trait that may shed light on what made Teddy such a unique character.

The trait is called “fearless dominance,” defined as the “boldness associated with psychopathy.” Researchers say that when found in the psychological makeup of presidents, it’s “associated with better rated presidential performance, leadership, persuasiveness, crisis management, Congressional relations, and allied variables; it was also associated with several largely or entirely objective indicators of presidential performance, such as initiating new projects and being viewed as a world figure.”

Researchers tested their hypothesis in the 42 U.S. presidents up to and including George W. Bush using (a) psychopathy trait estimates derived from personality data completed by historical experts on each president, (b) independent historical surveys of presidential leadership, and (c) largely or entirely objective indicators of presidential performance.

More than 100 experts, including biographers, journalists and scholars who are established authorities on one or more U.S. presidents, evaluated their target presidents using the data derived from the sources listed above.
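
To make the basic analytic move concrete, here is a hedged sketch (entirely made-up numbers, not the study’s data or its statistical models): line up expert-derived fearless dominance estimates against historians’ performance ratings and check whether the two track each other.

```python
import numpy as np

# Hypothetical ratings for five unnamed presidents -- illustration only
fearless_dominance = np.array([7.8, 7.1, 6.9, 6.4, 5.2])  # expert-derived trait estimates
rated_performance = np.array([8.2, 7.5, 6.8, 6.9, 5.9])   # historians' leadership ratings

# A positive correlation is the kind of association the researchers describe
r = np.corrcoef(fearless_dominance, rated_performance)[0, 1]
print(f"fearless dominance vs. rated performance: r = {r:.2f}")
```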

The results:

Theodore Roosevelt ranked highest in fearless dominance, followed by

  • John F. Kennedy,
  • Franklin D. Roosevelt,
  • Ronald Reagan,
  • Rutherford Hayes,
  • Zachary Taylor,
  • Bill Clinton,
  • Martin Van Buren,
  • Andrew Jackson,
  • and George W. Bush.

6. Why Jerks Get Ahead

As much as we’d rather not admit it, jerks often get ahead in our world — usually at the expense of a lot of other people along the way. Psychological research over the past few years is revealing why. As it turns out, acting like a jerk isn’t the secret to reaping the rewards of jerkiness. The real secret is simply letting others place you on a pedestal.

The most recent study illustrating this point was covered in the Wall Street Journal in a piece entitled “Why Are We Overconfident?” The study sought to uncover what adaptive advantage overconfidence could possibly convey, since it so often leads to errors that don’t benefit us. The short answer is that even if overconfidence produces subpar results, others still perceive it positively. Quoting from the article:

In one of several related experiments, researchers had people take a geography quiz —first alone, then in pairs. The task involved placing cities on a map of North America unmarked by state or national borders. The participants rated themselves on their own abilities and rated each other, secretly, on a number of qualities.

As expected, most people rated their own geographic knowledge far higher than actual performance would justify. In the interesting new twist, however, the people most prone to overrate themselves got higher marks from their partners on whether they “deserved respect and admiration, had influence over the decisions, led the decision-making process, and contributed to the decisions.”

In other words, overconfident people are perceived as having more social status, and social status is golden.

A study last year highlighted a similar result, but this time with respect to another marquee jerk trait: rudeness. Being rude is a categorically negative behavior by most standards, and to suggest otherwise–that is, to mount a defense of rudeness–would be a really strange thing to do. But psychology research is often at its best when it endorses positions that at first glance seem awfully strange.

And so it is with rudeness, because while most of us deplore it, research suggests that we also see it as a sign of power. A study published in the journal Social Psychological and Personality Science indicated that the ruder someone acts, the more convinced observers become that he or she is powerful, and therefore does not have to respect the same rules the rest of us bow to.

In one of the experiments, study participants read about a visitor to an office who marched in and poured himself a cup of “employee only” coffee without asking. In another case they read about a bookkeeper who flagrantly bent accounting rules. Participants rated the rule breakers as more in control and powerful compared to people who didn’t steal the coffee or break accounting rules.

In another experiment participants watched a video of a man at a sidewalk café put his feet on another chair, tap cigarette ashes on the ground and rudely order a meal. Participants rated the man as more likely to “get to make decisions” and able to “get people to listen to what he says” than participants who saw a video of the same man behaving politely.

What this study appears to indicate is that violating norms is viewed by others as a sign of power, even if the observers would otherwise judge those violations as rude or flatly wrong. Considering many of the openly rude jerks we venerate, these findings make a lot of sense. (Though I would like to see a follow-on study that examines observer perceptions when the rude rule breakers are caught. Perhaps it’s less the rudeness and corruption we admire, and more the ability to get away with it that intrigues us. Maybe we’re just a little smitten with the charisma of villainy.)

Taken together with the results of the study on overconfidence, it would seem that jerks are inherently quite good at putting one over on us. In fact, they don’t even have to try. They just need to work their trade and earn the praise of their peers. 

7. How Stress Damages Your Mental Health

Researchers at the Society for Neuroscience meeting in New Orleans (Oct 13-17, 2012) presented studies showing how stress, no matter its cause, alters brain circuitry in ways that can have long-term effects on mental health.

Research by Dipesh Chaudhury of the Mount Sinai School of Medicine in New York shows that traumatic events appear to cause depression by derailing the brain’s so-called reward system, which normally causes pleasurable feelings whenever we engage in fun activities like spending time with friends. People who have suffered major stress, such as soldiers returning from combat, often report that they no longer find pleasure in these things.

Mice respond in a similar way to traumatic events, Chaudhury says. And his research shows that this response can be prevented by reducing the activity of certain brain cells involved in the reward system. [Source: NPR, October 15, 2012] A drug causing a similar outcome could eventually be effective in humans.

Stress also causes the release of chemicals that impair the function of the prefrontal cortex, home of higher level thinking. When we experience acute stress, these chemicals–including cortisol and norepinephrine–heighten our reactive tendencies by muting our reflective tendencies, leading to everything from anxiety to aggression to depression.

One of the drugs that appear to reverse these effects is ketamine (I wrote about it recently here), an anesthetic that has the ability to rejuvenate damaged nerve cells in hours, potentially making it a groundbreaking new type of antidepressant. Derivatives of the drug are already in human trials.

The American Psychological Association’s “Stress in America” report provides a useful table, shown below, indicating the effects of stress on your body, your mood, and your behavior. 
Common effects of stress

On your body:

  • Headache
  • Muscle tension or pain
  • Chest pain
  • Fatigue
  • Change in sex drive
  • Stomach upset
  • Sleep problems

On your mood:

  • Anxiety
  • Restlessness
  • Lack of motivation or focus
  • Irritability or anger
  • Sadness or depression

On your behavior:

  • Overeating or undereating
  • Angry outbursts
  • Drug or alcohol abuse
  • Tobacco use
  • Social withdrawal


Source: American Psychological Association’s “Stress in America” Report


8. Are Two Heads Really Better Than One?

Group thinking has been a popular topic in behavioral research for a long time, particularly so in the last couple of decades. The judgment of one person can be called into question for a hundred different reasons – everything from preexisting beliefs to confirmation bias and beyond.

But if you add another mind to the mix, then theoretically a buffer against some of those biases has been introduced, and better judgments should result.

Or so the theory goes.

A recent study published in the journal Psychological Science flips this idea on its head by asking if two people may actually produce worse judgments, not because together they aren’t capable of making a good decision – but precisely because they are so confident that they can.

Researchers tested this hypothesis with 252 subjects, dividing them into a group of individual decision-makers and a group of partners (referred to as dyads). They were given a set of questions that required estimated answers (e.g., “What percentage of Americans own pets?”). For each question, they were also asked to rank their level of confidence in their answer on a scale of 1 to 5. To make things more interesting, subjects earned $30 for making estimates, but lost $1 for each percentage point their answer deviated from the correct value.
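
As a concrete illustration of that incentive (my own sketch, not the study’s materials), the scoring rule amounts to a simple function of estimation error:

```python
def payoff(estimate_pct: float, truth_pct: float, base: float = 30.0) -> float:
    """$30 base payment minus $1 for each percentage point of error."""
    return base - abs(estimate_pct - truth_pct)

# A guess of 48% when the true figure is 62% misses by 14 points and earns $16
print(payoff(estimate_pct=48, truth_pct=62))  # 16.0
```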

Each individual or dyad was then given a “peer advisor” opinion on their responses and told they could choose to revise their answers, if they wished, based on the new information.

The results showed that the dyads were more confident in their responses than individuals, and also chose to ignore advisor input more often than individuals. But they were also statistically no better than individuals in making correct estimates.

While notable, that isn’t the key finding of this study. The most interesting statistic emerges from comparing the subjects’ initial estimates with their revised estimates. Individuals who chose to revise their answers based on the advisors’ opinions reduced their error by about 10 percentage points. Dyads that revised their answers improved by only about 5 percentage points.

The reason why has everything to do with the confidence-buttressing effect of two people working together. Individuals were willing to make larger revisions in their estimates based on new information, while the dyads made relatively small revisions, if any. The research team dubs this the “cost of collaboration.” If the dyads were more willing to integrate new information into their judgments, then they could potentially produce better results than individuals; but their reluctance to consider new information added no value to the end result.
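
One way to picture this “cost of collaboration” (a hypothetical illustration, not the study’s analysis) is as the weight a decision-maker places on the advisor’s estimate: the smaller the weight, the less the error shrinks, which matches the pattern reported for dyads relative to individuals. In this toy example the advisor happens to be closer to the truth, so more weight helps.

```python
def revised_error(own_pct: float, advisor_pct: float, truth_pct: float, weight: float) -> float:
    """Absolute error after revising: revised = (1 - weight) * own + weight * advisor."""
    revised = (1 - weight) * own_pct + weight * advisor_pct
    return abs(revised - truth_pct)

own, advisor, truth = 40, 58, 62   # made-up percentages for one question
for w in (0.0, 0.2, 0.5):          # 0.2 ~ a token adjustment, 0.5 ~ equal weighting
    print(f"weight on advice {w:.1f}: off by {revised_error(own, advisor, truth, w):.1f} points")
```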

Psychologist Julie Minson, co-lead of the study, says these findings don’t negate the value of group decision-making, but they do highlight a need for caution. “If people become aware that collaboration leads to an increase in overconfidence, you can set up ways to mitigate it. Teams could be urged to consider and process each other’s inputs more thoroughly.”

The same goes for a couple choosing a mortgage or a car, Minson adds. “Just because you make a decision with someone else and you feel good about it, don’t be so sure that you’ve solved the problem and you don’t need help from anybody else.”


9. Sleeplessness Causes Our Mental Circuits to Overheat

We intuitively know that sleep is important, and a great deal of research on the health effects of sleeplessness backs up this belief. But what exactly is going on in our brains when we don’t get enough shuteye?

Researchers tackled this question in a new study that suggests our brains become bundles of hyper-reactive nerve cells as the sleepless hours tick by. In a sense, our noggins overheat when we deprive them of necessary down time–bad news for those of us who work into the wee hours.

The research team, led by Marcello Massimini of the University of Milan, delivered a stout magnetic current to study participants’ brains that set off a cascade of electrical responses throughout their nerve cells. The team then measured the strength of this electrical response in the frontal cortex, a brain region that’s involved in making executive decisions, using nodes attached to participants’ scalps. This procedure was completed a day before a night of sleep deprivation and repeated afterward.

The results: participants’ electrical responses were significantly stronger after a night of sleep deprivation than they were the previous day. The effect was corrected by one good night’s sleep.

Writing in Science News, Laura Sanders points out that the results reinforce the most widely held theory of why we sleep:

During waking hours, the brain accumulates connections between nerve cells as new things are learned. Sleep, the theory says, sweeps the brain of extraneous clutter, leaving behind only the most important connections.

The study was published in the February 7, 2012, issue of the journal Cerebral Cortex.


10. How Your Brain Could be Keeping You Fat

Neurogenesis is a wonderful word that means our brains continue to grow new neurons throughout our lifetimes. Not long ago, the brain was thought of as a static hunk of tissue that stopped growing after a neuronal “pruning” period early in our lives.

With time, neuroscience research uncovered two parts of the brain that evidence neurogenesis: the hippocampus, associated with memory formation, and the olfactory bulb, associated with the sense of smell.

Now, a study has uncovered a third part of the brain that, at least in mice, shows positive signs of neurogenesis: the hypothalamus, associated with body temperature, metabolism, sleep, hunger, thirst and a few other critical functions.

The news about this particular form of neurogenesis, however, isn’t so wonderful.

Researchers from the Department of Neuroscience at the Johns Hopkins University School of Medicine injected mice with a chemical that incorporates itself into newly dividing cells. They found that the chemical appeared in rapidly proliferating cells called tanycytes in the hypothalamus, and further tests confirmed that the tanycytes specifically produced new neurons and not other types of cells.

The research team then wanted to find out what these neurons do, so they studied the new hypothalamus neurons in mice that had been fed a high fat diet since birth. Since the hypothalamus is associated with hunger and metabolism, the team speculated that the neurons may be linked in some way to weight gain. Turns out, they were right.

At a very young age, the mice fed a high fat diet didn’t show a difference in neurogenesis from young mice fed a normal diet. But when they became adults, the mice fed a high fat diet showed four times the neurogenesis of the normal mice, and gained significantly more weight and had much higher fat mass.

To make sure that the new neurons were actually correlating with the weight gain, the researchers killed the neurons in some of the mice with focused X-rays. Those mice showed far lower weight gain and body fat than those fed the same high fat diet, and even lower than mice that were more active.

In other words, it’s clear that these neurons have a major impact on weight regulation and fat storage in mice — and it’s altogether possible the same holds true for us.

Further tests will have to be conducted to find out if that’s the case, but from an evolutionary standpoint it would make sense. Dr. Seth Blackshaw, the lead researcher, comments that hypothalamic neurogenesis may be a mechanism that evolved to help wild animals, and probably our ancestors as well, survive. “Wild animals that find a rich and abundant source of food typically eat as much as possible as these foods are generally rare to find.”

But in a culture with an abundance of food, that formerly life-saving advantage can turn into a distinct disadvantage. Blackshaw explains, “In the case of the lab animals and also in people in developed countries who have an almost unlimited access to food, this neurogenesis is not at all beneficial as it potentially encourages unnecessary excessive weight gain and fat storage.” In short, our diets may be training our brains to keep us fat.

On the upside, if these findings are confirmed in humans, they may eventually lead to a drug that blocks neurogenesis in the hypothalamus — but we’re a long way from there.

The study was published in the journal Nature Neuroscience.


~ You can find me on Twitter @neuronarrative and at my website, The Daily Brain.
