Showing posts with label creativity. Show all posts

Wednesday, September 24, 2014

Focused Attention, Open Monitoring, and Loving Kindness Meditation: Effects on Attention, Conflict Monitoring, and Creativity – A Review


In this new mini review from Frontiers in Cognition, Lippelt, Hommel, and Colzato compare three meditation types (focused attention, open monitoring and loving kindness) in terms of their effects on attention, conflict monitoring, and creativity.

The three research areas the authors covered in this review (attentional control, performance monitoring, and creativity or thinking style) seem to imply the operation of extended neural networks, which might suggest that meditation operates on neural communication, perhaps by impacting neurotransmitter systems. They speculate:
Finally, it may be interesting to consider individual differences more systematically. If meditation really affects interactions between functional and neural networks, it makes sense to assume that the net effect of meditation on performance depends on the pre-experimental performance level of the individual—be it in terms of compensation (so that worse performers benefit more) or predisposition (so that some are more sensitive to meditation interventions).

Full Citation: 
Lippelt DP, Hommel B and Colzato LS. (2014, Sep 23). Focused attention, open monitoring and loving kindness meditation: effects on attention, conflict monitoring, and creativity – A review. Frontiers in Psychology: Cognition. 5:1083. doi: 10.3389/fpsyg.2014.01083

Focused attention, open monitoring and loving kindness meditation: effects on attention, conflict monitoring, and creativity – A review


Dominique P. Lippelt, Bernhard Hommel and Lorenza S. Colzato
  • Cognitive Psychology Unit, Institute for Psychological Research and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
Meditation is becoming increasingly popular as a topic for scientific research and theories on meditation are becoming ever more specific. We distinguish between what is called focused attention meditation, open monitoring meditation, and loving kindness (or compassion) meditation. Research suggests that these meditations have differential, dissociable effects on a wide range of cognitive (control) processes, such as attentional selection, conflict monitoring, and divergent and convergent thinking. Although research on exactly how the various meditations operate on these processes is still missing, different kinds of meditations are associated with different neural structures and different patterns of electroencephalographic activity. In this review we discuss recent findings on meditation and suggest how the different meditations may affect cognitive processes, and we give suggestions for directions of future research.

Introduction


Even though numerous studies have shown meditation to have significant effects on various affective and cognitive processes, many still view meditation as a technique primarily intended for relaxation and stress reduction. While meditation does seem to reduce stress and to induce a relaxing state of mind, it can also have significant effects on how people perceive and process the world around them and alter the way they regulate attention and emotion. Lutz et al. (2008) proposed that the kind of effect meditation has is likely to differ according to the kind of meditation that is practiced. Currently the most researched types of meditation include focused attention meditation (FAM), open monitoring meditation (OMM), and loving-kindness meditation (LKM). Unfortunately, however, the methodological diversity across the available studies with regard to sample characteristics, tasks used, and experimental design (within vs. between group; with vs. without control condition) renders the comparison between them difficult. This review is primarily focused on FAM and OMM studies [1] and on how these two (proto-)types of meditation are associated with different neural underpinnings and differential effects on attentional control, conflict monitoring, and creativity.


Meditation Types



Usually, FAM is the starting point for any novice meditator (Lutz et al., 2008; Vago and Silbersweig, 2012). During FAM the practitioner is required to focus attention on a chosen object or event, such as breathing or a candle flame. To maintain this focus, the practitioner has to constantly monitor the concentration on the chosen event so as to avoid mind wandering (Tops et al., 2014). Once practitioners become familiar with the FAM technique and can easily sustain their attentional focus on an object for a considerable amount of time, they often progress to OMM. During OMM the focus of the meditation becomes the monitoring of awareness itself (Lutz et al., 2008; Vago and Silbersweig, 2012). In contrast to FAM, there is no object or event in the internal or external environment that the meditator has to focus on. The aim is rather to stay in the monitoring state, remaining attentive to any experience that might arise, without selecting, judging, or focusing on any particular object. To start, however, the meditator will focus on a chosen object, as in FAM, but will subsequently gradually reduce this focus, while emphasizing the activity of monitoring awareness.


Loving-kindness meditation incorporates elements of both FAM and OMM (Vago and Silbersweig, 2012). Meditators focus on developing love and compassion first for themselves and then gradually extend this love to ever more “unlikeable” others (e.g., from self to a friend, to someone one does not know, to all living beings one dislikes). Any negative associations that might arise are supposed to be replaced by positive ones such as pro-social or empathic concern.

Meditation Types, Attentional Scope, and Endogenous Attention


Whereas some meditation techniques require the practitioners to focus their attention on only a certain object or event, other techniques allow any internal or external experiences or sensations to enter awareness. Different meditation techniques might therefore bias the practitioner to either a narrow or broad spotlight of attention. This distinction is thought to be most evident with regard to FAM and OMM. FAM induces a narrow attentional focus due to the highly concentrative nature of the meditation, whereas OMM induces a broader attentional focus by allowing and acknowledging any experiences that might arise during meditation.

In a seminal study, Slagter et al. (2007) investigated the effects of 3 months of intensive Vipassana meditation (an OMM-like meditation) training on the allocation of attention over time as indexed by the “attentional-blink” (AB) deficit, thought to result from competition between two target stimuli (T1 and T2) for limited attentional resources. After the training, because of the acquisition of a broader attentional scope, participants showed a smaller AB deficit as an indication of being able to distribute their brain-resource allocation to both T1 and T2. The reduced AB size was accompanied by a smaller T1-elicited P3b, a brain-potential thought to index attentional resource allocation.
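To make the measure concrete, here is a toy sketch (with hypothetical accuracy values, not data from Slagter et al., 2007) of how an attentional-blink deficit is typically quantified: the drop in T2 detection accuracy at short T1–T2 lags relative to long lags.

```python
# Toy illustration of how an attentional-blink (AB) deficit is quantified:
# the drop in T2 accuracy (given T1 was reported correctly) at short
# T1-T2 lags relative to long lags. All numbers are hypothetical.

def ab_deficit(t2_acc_short_lag, t2_acc_long_lag):
    """AB size: how much T2 accuracy drops at short lags vs. long lags."""
    return round(t2_acc_long_lag - t2_acc_short_lag, 2)

# Before training: a large accuracy drop at short lags -> large AB deficit.
before = ab_deficit(t2_acc_short_lag=0.55, t2_acc_long_lag=0.90)  # 0.35
# After OMM-like training: attentional resources are spread over both
# targets, so the drop at short lags shrinks -> smaller AB deficit.
after = ab_deficit(t2_acc_short_lag=0.75, t2_acc_long_lag=0.90)   # 0.15
```

A smaller value after training is what the study reports; the accompanying reduction in the T1-elicited P3b is the electrophysiological side of the same story.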

A more recent study comparing meditators (trained in mindfulness-based stress-reduction) to non-meditators found that meditators show evidence of more accurate and efficient visual attention (Hodgins and Adair, 2010). Meditators monitored events more accurately in a concentration task and showed less interference from invalid cues in a visual selective attention task. Furthermore, meditators showed improved flexible visual attention by identifying a greater number of alternative perspectives in multiple perspectives images. Another study compared OMM and FAM meditators on a sustained attention task (Valentine and Sweet, 1999): OMM meditators outperformed FAM meditators when the target stimulus was unexpected. This might indicate that the OMM meditators had a wider attentional scope, even though the two meditator groups did not differ in performance when the stimulus was expected.


Electrophysiological evidence for meditation-induced improvements in attention comes from a recent study in which Vipassana meditators performed an auditory oddball task before and after meditation (in one session) and random thinking (in another session; Delgado-Pastor et al., 2013). The meditation session consisted of three parts: first, an initial phase of self-regulating attention, focused on the sensations of air entering and leaving the body at the nostrils; second, a central phase of focusing attention on sensations from all parts of the body while maintaining an attitude of non-reactivity and acceptance; and finally, a brief phase aimed at generating feelings of compassion and unconditional love toward all living beings. Meditators showed greater P3b amplitudes to target tones after meditation than either before meditation or after the no-meditation session, an effect that is thought to reflect enhanced attentional engagement during the task.

Support for the assumption that FAM induces a narrow attentional focus comes from several studies that show that FAM increases sustained attention (Carter et al., 2005; Brefczynski-Lewis et al., 2007). Neuroimaging evidence by Hasenkamp et al. (2012) suggests that FAM is associated with increased activity in the right dorsolateral prefrontal cortex (dlPFC), which has been associated with “the repetitive selection of relevant representations or recurrent direction of attention to those items” (D’Esposito, 2007, p. 765). Thus, in the context of meditation experience, dlPFC might be involved in repeatedly redirecting or sustaining attention to the object of focus. It would be interesting to investigate whether this pattern of activation is unique to FAM or whether other kinds of meditation lead to similar increases in activity in the dlPFC. If the dlPFC is indeed involved in the repetitive redirection of attention to the same object of focus, then it should not be as active during OMM during which attention is more flexible and continuously shifted to different objects. Alternatively, however, if during OMM the meditator achieves a state of awareness where (only) awareness itself is the object of focus, the dlPFC might again play a role in maintaining this focus. Similarly, it would be interesting to examine how LKM modulates attentional processes and the activation of the dlPFC.

In a follow-up study, Hasenkamp and Barsalou (2012) found that, during rest, connectivity between the right dlPFC and the right insula was stronger in experienced meditators than in novices. The authors suggest that improved connectivity with the right insula might reflect enhanced interoceptive attention to internal bodily states. In support of this idea, a recent study reports that mindfulness training predicted greater activity in posterior insula regions during interoceptive attention to respiratory sensation (Farb et al., 2013). Various studies have shown theta activity to be increased during meditation, primarily OMM-like meditations (e.g., Baijal and Srinivasan, 2010; Cahn et al., 2010; Tsai et al., 2013; for review see Travis and Shear, 2010). This increase in theta activity, usually mid-frontal, has been suggested to be involved in sustaining internalized attention. As such, similar increases in theta activity would be expected for LKM during which attention is also internalized, but not during FAM where attention is explicitly focused on an external object, even though typically the object of meditation in FAM, at least for beginners, is the breath, which is internal.


Additionally, active mindfulness meditation (versus rest) was associated with increased functional connectivity between the dorsal attention network, the Default Mode Network and the right prefrontal cortex (Froeliger et al., 2012). Thus, meditation practice seems to enhance connectivity within and between attentional networks and a number of broadly distributed other brain regions subserving attention, self-referential, and emotional processes.

Meditation Types and Conflict Monitoring


A fundamental skill acquired through meditation is the ability to monitor the attentional focus in order to “redirect it” in the case of conflicting thoughts or external events. Not surprisingly, several studies have already shown improvements in conflict monitoring after meditation. Tang et al. (2007) investigated whether a training technique based on meditational practices called integrative body-mind training (IBMT; most similar to OMM) could improve performance on an attentional network task (ANT; Fan et al., 2002). The ANT was developed to keep track of three different measures, namely orientation, alerting, and conflict resolution. While IBMT had no effect on orienting and alerting scores, it did improve conflict resolution. In a similar study FAM and OMM were compared on an emotional variant of the ANT. Both types of meditation improved conflict resolution compared to a relaxation control group (Ainsworth et al., 2013). Surprisingly, there was no difference between the two meditation types, even though mindfulness disposition at baseline (i.e., trait mindfulness) was also associated with improved conflict resolution.


Further evidence for improvements in conflict monitoring comes from a study investigating the effect of a 6-week-long FAM training (versus relaxation training and a waiting-list group) on a discrimination task intended to investigate the relationship between attentional load and emotional processing (Menezes et al., 2013). Participants had to respond to whether or not the orientation of two lines presented on either side of an emotionally distracting picture was the same. Importantly, those who underwent meditation or relaxation training committed fewer errors than the waiting-list control group. Furthermore, error rates were lowest in the meditation group and highest in the waiting-list group, with the relaxation group in between. With regard to emotional regulation, meditators showed less emotional interference than the other two groups when attentional load was low, and only meditators showed a relationship between the amount of weekly practice and reductions in emotional interference.


In a study by Xue et al. (2011), meditation-naïve participants were randomly assigned to either an 11 h IBMT course or a relaxation training. Compared to the relaxation training, the IBMT group showed higher network efficiency and degree of connectivity of the anterior cingulate cortex (ACC). As the ACC is involved in processes such as self-regulation, detecting interference and errors, and overcoming impasses (e.g., Botvinick et al., 2004), improvements in ACC functioning might well be the neural mechanism by which IBMT improves conflict resolution. In an interesting study by Hasenkamp et al. (2012), experienced meditators engaged in FAM inside an fMRI scanner and pushed a button whenever they started to mind-wander. The moment of awareness of mind-wandering was associated with increased activity in the dorsal ACC. Thus, as the mind starts to wander during meditation, the ACC might detect this “error” and feed it back to executive control networks (Botvinick et al., 1999; Carter and van Veen, 2007), so that attention can be refocused. Various other studies have also shown improvements in ACC functioning after meditation (Lazar et al., 2000; Baerentsen et al., 2001; Tang et al., 2009, 2010). Hölzel et al. (2007) compared experienced and novice meditators during a concentrative meditation (akin to FAM) and found that the experienced meditators showed greater activity in the rostral ACC during meditation than the novices, even though the two groups did not differ on an arithmetic control task. Similar results were obtained in another study comparing novices and experienced meditators (Baron Short et al., 2007) by showing more activity in the ACC during FAM compared to a control task. The activity in the ACC was more consistent and sustained for experienced meditators. Related to that, Buddhist monks exhibited more activity in the ACC during FAM than during OMM (Manna et al., 2010).
This suggests that the effects of meditation on the ACC and conflict monitoring do not seem to be limited to temporary state effects but carry over into daily life as a more stable “trait.” Future large-scale longitudinal studies should be conducted to address this issue and to disentangle short-term and long-term effects on conflict monitoring.


Improved conflict monitoring does not necessarily entail increased brain activity. Kozasa et al. (2012) compared meditators and non-meditators on a Stroop task in which semantic associations of words have to be suppressed to retrieve the color of the word. While behavioral performance was not significantly different for the two groups, compared to meditators, the non-meditators showed more activity in brain regions related to attention and motor control during incongruent trials. Given that the aim of many meditation techniques is to monitor the automatic arising of distracting sensations, this skill may become effortless with repeated meditation, thereby leading to less brain activity during the Stroop task. LKM has been shown to improve conflict resolution as well: when LKM and a control group were compared on a Stroop task, the LKM group was faster in responding to both congruent and incongruent trials, and the difference between congruent and incongruent trials (the congruency effect) was smaller as well (Hunsinger et al., 2013). As LKM incorporates elements of both FAM and OMM, it would be interesting to investigate how the effect size associated with LKM may be positioned in between FAM and OMM.
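For readers unfamiliar with the measure, the congruency effect mentioned above is simply mean reaction time on incongruent trials minus mean reaction time on congruent trials. A minimal sketch, using hypothetical reaction times rather than data from Hunsinger et al. (2013):

```python
# Sketch of the Stroop "congruency effect": mean reaction time (RT) on
# incongruent trials (e.g., the word "RED" printed in blue) minus mean RT
# on congruent trials. A smaller effect indicates better conflict
# resolution. RTs below (in ms) are hypothetical, for illustration only.
from statistics import mean

def congruency_effect(congruent_rts, incongruent_rts):
    return mean(incongruent_rts) - mean(congruent_rts)

control = congruency_effect([620, 640, 660], [760, 780, 800])  # 140 ms
lkm = congruency_effect([580, 600, 620], [660, 680, 700])      # 80 ms
```

On this toy data the hypothetical LKM group shows both faster responses overall and a smaller congruency effect, mirroring the pattern the study reports.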


Recently, meditators and non-meditators were compared on measures of the cortical silent period and short intracortical inhibition over the motor cortex (indexing GABAB and GABAA receptor-mediated inhibitory neurotransmission, respectively) before and after a 60-min meditation session (for the meditators) or cartoon viewing (for the non-meditators; Guglietti et al., 2013). The motor cortex was chosen because deficits in its cortical silent period had previously been associated with psychiatric illness and emotional dysregulation. No differences were found between meditators and non-meditators before the meditation/cartoon session. After meditation, however, there was a significant increase in GABAB activity in the meditator group. The authors suggest that “improved cortical inhibition of the motor cortex, through meditation, helps reduce perceptions of environmental threat and negative affect through top down modulation of excitatory neural activity” (Guglietti et al., 2013, p. 400). Future research might investigate whether similar GABA related mechanisms underlie the suppression of distracting stimuli during meditation and how different types of meditation might have distinguishable effects on these processes.

Meditation Types and Creativity


The scientific evidence regarding the connection between meditation and creativity is inconsistent. While some studies support a strong positive impact of meditation practice on creativity (Orme-Johnson and Granieri, 1977; Orme-Johnson et al., 1977), others found only a weak association or no effect at all (Cowger, 1974; Domino, 1977). Recently, Zabelina et al. (2011) found that a short-term mindfulness manipulation (basically OMM) facilitated creative elaboration at high levels of neuroticism. As pointed out by Colzato et al. (2012), these inconsistencies might reflect a failure to distinguish between different and dissociable processes underlying creativity, such as convergent and divergent thinking (Guilford, 1950). Accordingly, Colzato et al. (2012) compared the impact of FAM and OMM on convergent thinking (a process of identifying one “correct” answer to a well-defined problem) and divergent thinking (a process aiming at generating many new ideas) in meditation practitioners. Indeed, the two types of meditation affected the two types of thinking in opposite ways: while convergent thinking tended to improve after FAM, divergent thinking was significantly enhanced after OMM. Colzato et al. (2012) suggest that FAM and OMM induce two different, to some degree opposite cognitive-control states that support state-compatible thinking styles, such as convergent and divergent thinking, respectively. In contrast to convergent thinking, divergent thinking benefits from a control state that promotes quick “jumps” from one thought to another by reducing the top-down control of cognitive processing—as achieved by OMM.


Conclusion



Research on meditation is still in its infancy but our understanding of the underlying functional and neural mechanisms is steadily increasing. However, a serious shortcoming in the current literature is the lack of studies that systematically distinguish between and compare different kinds of meditation on various cognitive, affective or executive control tasks—a criticism that applies to neuroscientific studies in particular. Further progress will require a better understanding of the functional aims of particular meditation techniques and their strategies to achieve them. It will also be important to more systematically assess short- and long-term effects of meditation, as well as the (not yet understood) impact of meditation experience (as present in practitioners but not novices). For instance, several approaches (like Buddhism) favor a particular sequence of acquiring meditation skills (from FAM to OMM) but evidence that this sequence actually matters is lacking. Moreover, the neural mechanisms underlying meditation effects are not well understood. It is noteworthy that the three main research topics we have covered in the present review (attentional control, performance monitoring, and creativity or thinking style) imply the operation of extended neural networks, which might suggest that meditation operates on neural communication, perhaps by impacting neurotransmitter systems. Finally, it may be interesting to consider individual differences more systematically. If meditation really affects interactions between functional and neural networks, it makes sense to assume that the net effect of meditation on performance depends on the pre-experimental performance level of the individual—be it in terms of compensation (so that worse performers benefit more) or predisposition (so that some are more sensitive to meditation interventions).

Conflict of Interest Statement


The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Footnotes


  1. It is important to note that even though this mini review is based on the theoretical framework distinguishing FAM and OMM, another framework distinguishes between concentrative meditations, practices that regulate or control attention/awareness, and meditation practices that do not explicitly target attentional/effortful control (Chiesa and Malinowski, 2011; see also Chiesa, 2012 for a recent review on the difficulty of defining mindfulness). Moreover, Travis and Shear (2010) have pointed out a third meditation category besides FAM and OMM: automatic self-transcending, which transcends FAM and OMM through the absence of both (a) focus and (b) individual control or effort.

References at the Frontiers site

Tuesday, August 12, 2014

Nova - Evolution: The Mind's Big Bang




This video is Part 6 of a 7-part NOVA series on evolution, narrated by Liam Neeson. This episode looks at the emergence of mind that may have given early Homo sapiens the edge over Neanderthals.

Evolution: The Mind's Big Bang

2001 original air date

Anatomically modern humans existed more than 100,000 years ago, but with crude technology, no art, and primitive social interaction. By 50,000 years ago, something happened which triggered a creative, technological, and social explosion—and humans came to dominate the planet. This was a pivotal point in our evolution, the time when the human mind truly emerged. This program examines forces that may have contributed to the breakthrough, allowing us to prevail over other hominids, the Neanderthals, who co-existed with us for tens of thousands of years. The film then explores where this power of mind may lead us, as the culture we create overtakes our own biological evolution.

Tuesday, July 29, 2014

PsyBlog - 10 Remarkable Ways Nature Can Heal Your Mind


It seems to have taken a while to discover the obvious, but science is finally beginning to recognize how important it is for human beings to stay connected to nature, even in small ways, to maintain our physical and mental health. Until a few hundred years ago, most of humanity lived with an intimate connection to nature and the Earth - as it should be.

Here are a few previous news articles/studies on this topic.
The brief article below summarizes 10 specific ways that spending time in nature can improve mental and physical health.

10 Remarkable Ways Nature Can Heal Your Mind



People now spend up to 25% less time enjoying nature than they did 20 years ago. What is that doing to our minds?

People are spending less and less time enjoying the outdoors and nature with every passing year.
 
The recent shift away from nature has been incredible: some studies estimate people now spend 25% less time in nature than they did 20 years ago (Pergams & Zaradic, 2007).

Instead, recreational time is often spent surfing the internet, playing video games and watching movies.

This is a pity not merely because of the physical benefits of being outside, but also because of the psychological benefits.

Here are 10 of the most remarkable ways in which being outside, in nature, can heal the mind.

1. Feel more alive

Being inside all the time gives you a dead, flat feeling.

Being in nature, though, makes people feel more alive, which several studies have confirmed (Ryan et al., 2010).

It’s not just about the extra exercise people get when they’re in nature; nature has its own special effect.

Nature itself genuinely makes people feel happier, more healthy and more energetic.

Professor Richard Ryan, who has studied how nature benefits the mind, said:
“Nature is fuel for the soul.
Often when we feel depleted we reach for a cup of coffee, but research suggests a better way to get energized is to connect with nature.”
And this extra vitality has all sorts of knock-on benefits:
“Research has shown that people with a greater sense of vitality don’t just have more energy for things they want to do, they are also more resilient to physical illnesses.
One of the pathways to health may be to spend more time in natural settings.”

2. 50% more creative

Going into nature for an extended period can have remarkable effects on creativity.

A recent study had participants take a four- or six-day trip into the wilderness.

Their study showed that…
“…four days of immersion in nature, and the corresponding disconnection from multimedia and technology, increases performance on a creativity, problem-solving task by a full 50 percent,” (Atchley et al., 2012)
Why does it work? The psychologists explained:
“Our modern society is filled with sudden events (sirens, horns, ringing phones, alarms, television, etc.) that hijack attention.
By contrast, natural environments are associated with gentle, soft fascination, allowing the executive attentional system to replenish.” (Atchley et al., 2012)

3. Reduce acute stress

The Japanese are big fans of walking in the forest to promote their mental health.

The practice is called shinrin-yoku, which literally means ‘forest bathing’.

One study conducted by Japanese researchers has found that the practice is particularly useful for those suffering acute stress (Morita et al., 2006).

Their study of 498 people found that shinrin-yoku reduced hostility and depression, and increased liveliness, compared with control groups.


4. Ease dementia symptoms

Gardens in care homes may have therapeutic benefits for those suffering from dementia, according to a review of 17 separate studies (Whear et al., 2014).

Researchers at the University of Exeter Medical School found that gardens reduced patients’ agitation, encouraged activity and promoted relaxation.

The study’s lead author, Rebecca Whear, said:
“There is an increasing interest in improving dementia symptoms without the use of drugs.
We think that gardens could be benefiting dementia sufferers by providing them with sensory stimulation and an environment that triggers memories.
They not only present an opportunity to relax in a calming setting, but also to remember skills and habits that have brought enjoyment in the past.”

5. Improve memory

Short-term memory can be improved by 20% by walking in nature, or even just by looking at an image of a natural scene.

Marc G. Berman and colleagues at the University of Michigan wanted to test the effect of natural scenery on cognitive function (Berman, Jonides & Kaplan, 2008).
In the first of two studies, participants were given a 35-minute task involving repeating loads of random numbers back to the experimenter, but in reverse order.

After this they were sent out for a walk – one group around an arboretum and the other down a busy city street – both while being tracked with GPS devices.

They each repeated the memory test when they got back.

The results showed that people’s performance on the test improved by almost 20% after wandering amongst the trees. By comparison, those subjected to a busy street did not improve.

6. Greater sense of belonging

A small study of 10 children from a mostly Christian background found that those who spent more time outside felt more humbled by nature’s power as well as feeling a sense of belonging in the world.

Being outdoors more also enhanced the children’s appreciation of beauty.

These children took greater notice of colour, symmetry and balance in nature as well as displaying greater imagination and curiosity themselves.

The study’s lead author, Gretel Van Wieren, commented:
“This is the first generation that’s significantly plugged in to a different extent and so what does this mean?
Modern life has created a distance between humans and nature that now we’re realizing isn’t good in a whole host of ways.
So it’s a scary question: How will this affect our children and how are we going to respond?”

7. Urban mental health boost

There is hope for those who live in cities.

The mental health benefits of nature aren’t restricted to those who live in the countryside.

Moving to a greener urban area boosts mental health for at least three years.

The lead author Ian Alcock said:
“We’ve shown that individuals who move to greener areas have significant and long-lasting improvements in mental health.
These findings are important for urban planners thinking about introducing new green spaces to our towns and cities, suggesting they could provide long term and sustained benefits for local communities.”

8. Increase self-esteem

All kinds of exercise in nature can boost your self-esteem. And it’s surprising how little you have to do to get the boost.

One review analysed data from 1,252 people who took part in 10 different studies (Barton & Pretty, 2010).

People’s activities varied considerably, including things like gardening, walking, cycling, boating, fishing and horse-riding.

The study found that just 5 minutes of ‘green exercise’ gave the largest boost to self-esteem.

9. Improve ADHD symptoms

Children with Attention Deficit Hyperactivity Disorder who play more outside have less severe symptoms, according to research.

Taylor and Kuo (2011) found that, amongst 400 children diagnosed with ADHD, those who routinely played outside in green settings had better concentration.

Not only that, but they were usually calmer, more relaxed and happier.

The study even found that children who sat indoors looking out at a green space did better than those who were outside, but in a man-made environment without trees or grass.

That’s the power of green spaces.

10. Help your brain work in sync

Tranquil natural scenes, like a seascape, cause vital areas of the brain to work in sync, according to researchers at the University of Sheffield (Hunter et al., 2010).

By contrast, man-made environments like roads disrupt connections within the brain.
Dr Michael Hunter, who led the research, said:
“People experience tranquillity as a state of calmness and reflection, which is restorative compared with the stressful effects of sustained attention in day-to-day life.
It is well known that natural environments induce feelings of tranquillity whereas man-made, urban environments are experienced as non-tranquil.”

Into the light…

As William Wordsworth put it:
“Come forth into the light of things, let nature be your teacher.”
Image credit: Ruben Alexander & Cedric Lange & Trey Ratcliff

Friday, July 04, 2014

Brandon Keim - Evolution’s Contrarian Capacity for Creativity

From Nautilus, Facts So Romantic on Biology, this is an interesting article on creativity in evolution. The author begins with two small songbirds: the willow tit (Poecile montanus) and its close relative the black-capped chickadee (Poecile atricapillus).
To the naked eye, there’s not much to distinguish between them. Both are small, with black-and-white heads and gray-black wings, seed-cracking bills, and a gregarious manner. For a long time, they were even thought to be the same species. The only obvious difference, at least with the willow tit I saw, was a duskier olive underbelly coloration.
These two nearly identical birds are a nice jumping off place for a discussion of diversity in evolution.

Evolution’s Contrarian Capacity for Creativity

Posted By Brandon Keim on Jul 02, 2014


The easily confused willow tit and black-capped chickadee f.c.franklin via Flickr / Brandon Keim

ONE OF MY favorite pastimes while traveling is watching birds. Not rare birds, mind you, but common ones: local variations on universal themes of sparrow and chickadee, crow and mockingbird.

I enjoy them in the way that other people appreciate new food or architecture or customs, and it can be a strange habit to explain. You’re 3,000 miles from home, and less interested in a famous statue than the pigeon on its head?! Yet there’s something powerfully fascinating about how familiar essences take on slightly unfamiliar forms; an insight, even, into the miraculous essence of life, a force capable of resisting the universe’s otherwise inevitable tendency to come to rest.

Take, for example, a small songbird known as the willow tit, encountered on a recent trip to Finland and closely related—Poecile montanus to Poecile atricapillus—to the black-capped chickadee, the official bird of my home state of Maine. To the naked eye, there’s not much to distinguish between them. Both are small, with black-and-white heads and gray-black wings, seed-cracking bills, and a gregarious manner. For a long time, they were even thought to be the same species. The only obvious difference, at least with the willow tit I saw, was a duskier olive underbelly coloration.

Which raises a question, asked by Darwin and J. B. S. Haldane and generations of biologists since: Why? Why is a bird, similar in so many ways to another, different in this one? It’s a surprisingly tricky question.

Generally speaking, we tend to think of evolution in purposeful terms: There must be a reason for difference, an explanation grounded in the chances of passing on one’s supposedly selfish genes. Perhaps those olive feathers provide better camouflage amidst Finnish vegetation, or have come to signify virility in that part of the world. As evolutionary biologists Suzanne Gray and Jeffrey McKinnon describe in a Trends in Ecology and Evolution review (pdf), differences in color are sometimes favored by natural selection—except, that is, when they’re not.

Often differences in color don’t have any function at all; they just happen to be. They emerge through what’s known as neutral evolution: mutations randomly spreading through populations. At times, this spread, this genetic drift, evenly distributes throughout the entire population, so the whole species changes together. Sometimes, though, the mutations confine themselves to different clusters within a species, like blobs of water cohering on a shower floor. 
Given enough time and space, these processes can—at least theoretically, as experiments necessary for conclusive evidence would take millennia to run—generate new species. Such appears to be the case with greenish warblers living around the Tibetan plateau, who during the last 10,000 years have diverged into multiple, non-interbreeding populations, even though there are no geographic barriers separating them or evidence of local adaptations favored by natural selection. The raw material of life simply diversified. One became many, because that’s just what it does1.
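To make the idea of neutral drift concrete, here is a toy simulation — my own illustrative sketch, not anything from the article or the studies it cites. It implements a simple Wright-Fisher model: a neutral trait with no selective advantage whatsoever, whose frequency nonetheless wanders each generation through random sampling alone, often until the whole population ends up uniformly with or without it.

```python
import random

def wright_fisher(pop_size=100, p0=0.5, generations=1000, seed=42):
    """Simulate neutral genetic drift of one allele in a fixed-size population.

    Each generation, every offspring inherits the allele with probability
    equal to its current frequency -- no selection, no advantage, pure chance.
    Returns the allele-frequency trajectory over time.
    """
    random.seed(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Binomial sampling: count how many of pop_size offspring
        # happen to draw the allele this generation.
        count = sum(random.random() < p for _ in range(pop_size))
        p = count / pop_size
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed -- drift has "decided"
            break
    return trajectory

traj = wright_fisher()
print(f"started at {traj[0]:.2f}, ended at {traj[-1]:.2f} "
      f"after {len(traj) - 1} generations")
```

Run it with different seeds and the allele sometimes fixes, sometimes vanishes — the same mechanism by which, given enough time and partial isolation, blobs of neutral variation can accumulate into the kind of divergence the greenish warblers show.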

Through this lens, evolution is an intrinsically generative force, with diversity proceeding ineluctably from the very existence of mutation. And here one can step back for a moment, go all meta, and ask: Where does mutation itself come from? How did evolution, and evolvability, evolve?

It’s a question on the bleeding edge of theoretical biology, and one studied by Joanna Masel at the University of Arizona. Her work suggests that, several billion years ago, when life consisted of self-replicating chemical arrangements, a certain amount of mutation was useful: After all, it made adaptation possible, if merely at the level of organized molecules persisting in gradients of heat and chemistry. There couldn’t be too much of it, though. If there were, the very mechanics of replication would break down.

Molecular biologist Irene Chen of the University of California, Santa Barbara, has further illuminated that delicate balance. Her work posits that, as an information storage system, DNA was less error-prone than RNA, its single-stranded molecular forerunner and the key material of the so-called RNA world thought to have preceded life as we now know it.

So, then, one can imagine, early in Earth’s history, life evolving again and again, crashing on the rocks of time and circumstance, until finally it hit upon just the right mutation rate—one that eons later would produce organisms and species and ecosystems that reproduce themselves and persist across time and chance.

That’s the remarkable thing about life: It continues. It keeps going and growing. Barring catastrophic asteroid strikes, or possibly the exponential population growth of a certain bipedal, big-brained hominid, life on Earth maintains complexity, actually increases it, even as the natural tendency of systems is to become simpler. Clocks unwind, suns run down, individual lives end, the Universe itself heads towards its own cold, motionless death; such is the Second Law of Thermodynamics, inviolable and inescapable.

Yet so long as Earth’s sun shines and genetic mutations arise, evolution may maintain its own thermodynamic law. Black-capped chickadees and willow tits diverge. Life pushes back. 

Footnote 
1. To be sure, the concept of neutrally driven biodiversity isn’t universally accepted. There may be subtle, intrinsic advantages to diversification. An example comes from the models of James O’Dwyer, a theoretical ecologist at the University of Illinois: Simply by virtue of their novelty, new species may be intrinsically less vulnerable to pathogens that afflicted their evolutionary parents.
So, then, perhaps willow tits and black-capped chickadees evolved slightly different feather patterns because they provided some immediate, direct benefit; or maybe it happened just because, for no reason at all, really; or maybe there was a subtle benefit intrinsic to the process of variation itself; or maybe it was some combination of all three, varying by time and place.
So, it’s complicated. But whatever the complications, all these processes share something very fundamental: the emergence of variety over time as life’s essential property.

Brandon Keim (@9brandon) is a freelance journalist specializing in science, environment, and culture.

Tuesday, July 01, 2014

Nancy Andreasen - Secrets of the Creative Brain


Here is an excellent article on creativity and the brain from The Atlantic. This is one of the more comprehensive and interesting articles on creativity I have seen in the mainstream press. It's well worth the read.

Secrets of the Creative Brain

A leading neuroscientist who has spent decades studying creativity shares her research on where genius comes from, whether it is dependent on high IQ—and why it is so often accompanied by mental illness.

Nancy Andreasen | June 25, 2014

AS A PSYCHIATRIST and neuroscientist who studies creativity, I’ve had the pleasure of working with many gifted and high-profile subjects over the years, but Kurt Vonnegut—dear, funny, eccentric, lovable, tormented Kurt Vonnegut—will always be one of my favorites. Kurt was a faculty member at the Iowa Writers’ Workshop in the 1960s, and participated in the first big study I did as a member of the university’s psychiatry department. I was examining the anecdotal link between creativity and mental illness, and Kurt was an excellent case study.

He was intermittently depressed, but that was only the beginning. His mother had suffered from depression and committed suicide on Mother’s Day, when Kurt was 21 and home on military leave during World War II. His son, Mark, was originally diagnosed with schizophrenia but may actually have bipolar disorder. (Mark, who is a practicing physician, recounts his experiences in two books, The Eden Express and Just Like Someone Without Mental Illness Only More So, in which he reveals that many family members struggled with psychiatric problems. “My mother, my cousins, and my sisters weren’t doing so great,” he writes. “We had eating disorders, co-dependency, outstanding warrants, drug and alcohol problems, dating and employment problems, and other ‘issues.’ ”)

While mental illness clearly runs in the Vonnegut family, so, I found, does creativity. Kurt’s father was a gifted architect, and his older brother Bernard was a talented physical chemist and inventor who possessed 28 patents. Mark is a writer, and both of Kurt’s daughters are visual artists. Kurt’s work, of course, needs no introduction.

For many of my subjects from that first study—all writers associated with the Iowa Writers’ Workshop—mental illness and creativity went hand in hand. This link is not surprising. The archetype of the mad genius dates back to at least classical times, when Aristotle noted, “Those who have been eminent in philosophy, politics, poetry, and the arts have all had tendencies toward melancholia.” This pattern is a recurring theme in Shakespeare’s plays, such as when Theseus, in A Midsummer Night’s Dream, observes, “The lunatic, the lover, and the poet / Are of imagination all compact.” John Dryden made a similar point in a heroic couplet: “Great wits are sure to madness near allied, / And thin partitions do their bounds divide.”

Compared with many of history’s creative luminaries, Vonnegut, who died of natural causes, got off relatively easy. Among those who ended up losing their battles with mental illness through suicide are Virginia Woolf, Ernest Hemingway, Vincent van Gogh, John Berryman, Hart Crane, Mark Rothko, Diane Arbus, Anne Sexton, and Arshile Gorky.

My interest in this pattern is rooted in my dual identities as a scientist and a literary scholar. In an early parallel with Sylvia Plath, a writer I admired, I studied literature at Radcliffe and then went to Oxford on a Fulbright scholarship; she studied literature at Smith and attended Cambridge on a Fulbright. Then our paths diverged, and she joined the tragic list above. My curiosity about our different outcomes has shaped my career. I earned a doctorate in literature in 1963 and joined the faculty of the University of Iowa to teach Renaissance literature. At the time, I was the first woman the university’s English department had ever hired into a tenure-track position, and so I was careful to publish under the gender-neutral name of N. J. C. Andreasen.

Not long after this, a book I’d written about the poet John Donne was accepted for publication by Princeton University Press. Instead of feeling elated, I felt almost ashamed and self-indulgent. Who would this book help? What if I channeled the effort and energy I’d invested in it into a career that might save people’s lives? Within a month, I made the decision to become a research scientist, perhaps a medical doctor. I entered the University of Iowa’s medical school, in a class that included only five other women, and began working with patients suffering from schizophrenia and mood disorders. I was drawn to psychiatry because at its core is the most interesting and complex organ in the human body: the brain.

I have spent much of my career focusing on the neuroscience of mental illness, but in recent decades I’ve also focused on what we might call the science of genius, trying to discern what combination of elements tends to produce particularly creative brains. What, in short, is the essence of creativity? Over the course of my life, I’ve kept coming back to two more-specific questions: What differences in nature and nurture can explain why some people suffer from mental illness and some do not? And why are so many of the world’s most creative minds among the most afflicted? My latest study, for which I’ve been scanning the brains of some of today’s most illustrious scientists, mathematicians, artists, and writers, has come closer to answering this second question than any other research to date.


 
THE FIRST ATTEMPTED EXAMINATIONS of the connection between genius and insanity were largely anecdotal. In his 1891 book, The Man of Genius, Cesare Lombroso, an Italian physician, provided a gossipy and expansive account of traits associated with genius—left-handedness, celibacy, stammering, precocity, and, of course, neurosis and psychosis—and he linked them to many creative individuals, including Jean-Jacques Rousseau, Sir Isaac Newton, Arthur Schopenhauer, Jonathan Swift, Charles Darwin, Lord Byron, Charles Baudelaire, and Robert Schumann. Lombroso speculated on various causes of lunacy and genius, ranging from heredity to urbanization to climate to the phases of the moon. He proposed a close association between genius and degeneracy and argued that both are hereditary. Francis Galton, a cousin of Charles Darwin, took a much more rigorous approach to the topic. In his 1869 book, Hereditary Genius, Galton used careful documentation—including detailed family trees showing the more than 20 eminent musicians among the Bachs, the three eminent writers among the Brontës, and so on—to demonstrate that genius appears to have a strong genetic component. He was also the first to explore in depth the relative contributions of nature and nurture to the development of genius.

As research methodology improved over time, the idea that genius might be hereditary gained support. For his 1904 Study of British Genius, the English physician Havelock Ellis twice reviewed the 66 volumes of The Dictionary of National Biography. In his first review, he identified individuals whose entries were three pages or longer. In his second review, he eliminated those who “displayed no high intellectual ability” and added those who had shorter entries but showed evidence of “intellectual ability of high order.” His final list consisted of 1,030 individuals, only 55 of whom were women. Much like Lombroso, he examined how heredity, general health, social class, and other factors may have contributed to his subjects’ intellectual distinction. Although Ellis’s approach was resourceful, his sample was limited, in that the subjects were relatively famous but not necessarily highly creative. He found that 8.2 percent of his overall sample of 1,030 suffered from melancholy and 4.2 percent from insanity. Because he was relying on historical data provided by the authors of The Dictionary of National Biography rather than direct contact, his numbers likely underestimated the prevalence of mental illness in his sample.

A more empirical approach can be found in the early-20th-century work of Lewis M. Terman, a Stanford psychologist whose multivolume Genetic Studies of Genius is one of the most legendary studies in American psychology. He used a longitudinal design—meaning he studied his subjects repeatedly over time—which was novel then, and the project eventually became the longest-running longitudinal study in the world. Terman himself had been a gifted child, and his interest in the study of genius derived from personal experience. (Within six months of starting school, at age 5, Terman was advanced to third grade—which was not seen at the time as a good thing; the prevailing belief was that precocity was abnormal and would produce problems in adulthood.) Terman also hoped to improve the measurement of “genius” and test Lombroso’s suggestion that it was associated with degeneracy.

In 1916, as a member of the psychology department at Stanford, Terman developed America’s first IQ test, drawing from a version developed by the French psychologist Alfred Binet. This test, known as the Stanford-Binet Intelligence Scales, contributed to the development of the Army Alpha, an exam the American military used during World War I to screen recruits and evaluate them for work assignments and determine whether they were worthy of officer status.

Terman eventually used the Stanford-Binet test to select high-IQ students for his longitudinal study, which began in 1921. His long-term goal was to recruit at least 1,000 students from grades three through eight who represented the smartest 1 percent of the urban California population in that age group. The subjects had to have an IQ greater than 135, as measured by the Stanford-Binet test. The recruitment process was intensive: students were first nominated by teachers, then given group tests, and finally subjected to individual Stanford-Binet tests. After various enrichments—adding some of the subjects’ siblings, for example—the final sample consisted of 856 boys and 672 girls. One finding that emerged quickly was that being the youngest student in a grade was an excellent predictor of having a high IQ. (This is worth bearing in mind today, when parents sometimes choose to hold back their children precisely so they will not be the youngest in their grades.)

These children were initially evaluated in all sorts of ways. Researchers took their early developmental histories, documented their play interests, administered medical examinations—including 37 different anthropometric measurements—and recorded how many books they’d read during the past two months, as well as the number of books available in their homes (the latter number ranged from zero to 6,000, with a mean of 328). These gifted children were then reevaluated at regular intervals throughout their lives.

“The Termites,” as Terman’s subjects have come to be known, have debunked some stereotypes and introduced new paradoxes. For example, they were generally physically superior to a comparison group—taller, healthier, more athletic. Myopia (no surprise) was the only physical deficit. They were also more socially mature and generally better adjusted. And these positive patterns persisted as the children grew into adulthood. They tended to have happy marriages and high salaries. So much for the concept of “early ripe and early rotten,” a common assumption when Terman was growing up.

But despite the implications of the title Genetic Studies of Genius, the Termites’ high IQs did not predict high levels of creative achievement later in life. Only a few made significant creative contributions to society; none appear to have demonstrated extremely high creativity levels of the sort recognized by major awards, such as the Nobel Prize. (Interestingly, William Shockley, who was a 12-year-old Palo Alto resident in 1922, somehow failed to make the cut for the study, even though he would go on to share a Nobel Prize in physics for the invention of the transistor.) Thirty percent of the men and 33 percent of the women did not even graduate from college. A surprising number of subjects pursued humble occupations, such as semiskilled trades or clerical positions. As the study evolved over the years, the term gifted was substituted for genius. Although many people continue to equate intelligence with genius, a crucial conclusion from Terman’s study is that having a high IQ is not equivalent to being highly creative. Subsequent studies by other researchers have reinforced Terman’s conclusions, leading to what’s known as the threshold theory, which holds that above a certain level, intelligence doesn’t have much effect on creativity: most creative people are pretty smart, but they don’t have to be that smart, at least as measured by conventional intelligence tests. An IQ of 120, indicating that someone is very smart but not exceptionally so, is generally considered sufficient for creative genius.


 
Kyle Bean

BUT IF HIGH IQ does not indicate creative genius, then what does? And how can one identify creative people for a study?

One approach, which is sometimes referred to as the study of “little c,” is to develop quantitative assessments of creativity—a necessarily controversial task, given that it requires settling on what creativity actually is. The basic concept that has been used in the development of these tests is skill in “divergent thinking,” or the ability to come up with many responses to carefully selected questions or probes, as contrasted with “convergent thinking,” or the ability to come up with the correct answer to problems that have only one answer. For example, subjects might be asked, “How many uses can you think of for a brick?” A person skilled in divergent thinking might come up with many varied responses, such as building a wall; edging a garden; and serving as a bludgeoning weapon, a makeshift shot put, a bookend. Like IQ tests, these exams can be administered to large groups of people. Assuming that creativity is a trait everyone has in varying amounts, those with the highest scores can be classified as exceptionally creative and selected for further study.
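As a rough illustration of how such “little c” tests are typically scored, here is a toy sketch of my own devising (the scoring scheme is a simplification I am assuming for illustration, not the actual procedure of any published instrument). Two common dimensions are fluency — how many distinct uses a subject produces — and originality, which rewards responses that few other subjects gave.

```python
from collections import Counter

def score_divergent(responses_by_subject):
    """Toy scoring of a divergent-thinking task ("uses for a brick").

    Fluency     = number of distinct responses a subject gives.
    Originality = credit for rarity: a response given by few subjects
                  scores close to 1; a response everyone gave scores 0.
    """
    # Count, for each response, how many subjects produced it.
    all_responses = [r for rs in responses_by_subject.values() for r in set(rs)]
    freq = Counter(all_responses)
    n_subjects = len(responses_by_subject)

    scores = {}
    for subject, rs in responses_by_subject.items():
        unique = set(rs)
        fluency = len(unique)
        originality = sum(1 - freq[r] / n_subjects for r in unique)
        scores[subject] = {"fluency": fluency,
                           "originality": round(originality, 2)}
    return scores

scores = score_divergent({
    "A": ["build a wall", "doorstop", "bookend"],
    "B": ["build a wall", "garden edging"],
    "C": ["build a wall", "shot put", "doorstop"],
})
```

Here “build a wall,” which everyone offered, earns no originality credit, while “bookend” or “shot put,” each unique to one subject, earn the most — capturing the intuition that divergent thinking is about both quantity and unusualness of responses.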

While this approach is quantitative and relatively objective, its weakness is that certain assumptions must be accepted: that divergent thinking is the essence of creativity, that creativity can be measured using tests, and that high-scoring individuals are highly creative people. One might argue that some of humanity’s most creative achievements have been the result of convergent thinking—a process that led to Newton’s recognition of the physical formulae underlying gravity, and Einstein’s recognition that E=mc².

A second approach to defining creativity is the “duck test”: if it walks like a duck and quacks like a duck, it must be a duck. This approach usually involves selecting a group of people—writers, visual artists, musicians, inventors, business innovators, scientists—who have been recognized for some kind of creative achievement, usually through the awarding of major prizes (the Nobel, the Pulitzer, and so forth). Because this approach focuses on people whose widely recognized creativity sets them apart from the general population, it is sometimes referred to as the study of “big C.” The problem with this approach is its inherent subjectivity. What does it mean, for example, to have “created” something? Can creativity in the arts be equated with creativity in the sciences or in business, or should such groups be studied separately? For that matter, should science or business innovation be considered creative at all?

Although I recognize and respect the value of studying “little c,” I am an unashamed advocate of studying “big C.” I first used this approach in the mid-1970s and 1980s, when I conducted one of the first empirical studies of creativity and mental illness. Not long after I joined the psychiatry faculty of the Iowa College of Medicine, I ran into the chair of the department, a biologically oriented psychiatrist known for his salty language and male chauvinism. “Andreasen,” he told me, “you may be an M.D./Ph.D., but that Ph.D. of yours isn’t worth sh--, and it won’t count favorably toward your promotion.” I was proud of my literary background and believed that it made me a better clinician and a better scientist, so I decided to prove him wrong by using my background as an entry point to a scientific study of genius and insanity.

The University of Iowa is home to the Writers’ Workshop, the oldest and most famous creative-writing program in the United States (UNESCO has designated Iowa City as one of its seven “Cities of Literature,” along with the likes of Dublin and Edinburgh). Thanks to my time in the university’s English department, I was able to recruit study subjects from the workshop’s ranks of distinguished permanent and visiting faculty. Over the course of 15 years, I studied not only Kurt Vonnegut but Richard Yates, John Cheever, and 27 other well-known writers.


 
The writer Kurt Vonnegut came from a family with a long history of mental illness—and exceptional creativity. Above: Vonnegut (right) meets with Hollywood producer Mark Robson in 1971. (AP)

Going into the study, I keyed my hypotheses off the litany of famous people who I knew had personal or family histories of mental illness. James Joyce, for example, had a daughter who suffered from schizophrenia, and he himself had traits that placed him on the schizophrenia spectrum. (He was socially aloof and even cruel to those close to him, and his writing became progressively more detached from his audience and from reality, culminating in the near-psychotic neologisms and loose associations of Finnegans Wake.) Bertrand Russell, a philosopher whose work I admired, had multiple family members who suffered from schizophrenia. Einstein had a son with schizophrenia, and he himself displayed some of the social and interpersonal ineptitudes that can characterize the illness. Based on these clues, I hypothesized that my subjects would have an increased rate of schizophrenia in family members but that they themselves would be relatively well. I also hypothesized that creativity might run in families, based on prevailing views that the tendencies toward psychosis and toward having creative and original ideas were closely linked.

I began by designing a standard interview for my subjects, covering topics such as developmental, social, family, and psychiatric history, and work habits and approach to writing. Drawing on creativity studies done by the psychiatric epidemiologist Thomas McNeil, I evaluated creativity in family members by assigning those who had had very successful creative careers an A++ rating and those who had pursued creative interests or hobbies an A+.

My final challenge was selecting a control group. After entertaining the possibility of choosing a homogeneous group whose work is not usually considered creative, such as lawyers, I decided that it would be best to examine a more varied group of people from a mixture of professions, such as administrators, accountants, and social workers. I matched this control group with the writers according to age and educational level. By matching based on education, I hoped to match for IQ, which worked out well; both the test and the control groups had an average IQ of about 120. These results confirmed Terman’s findings that creative genius is not the same as high IQ. If having a very high IQ was not what made these writers creative, then what was?

As I began interviewing my subjects, I soon realized that I would not be confirming my schizophrenia hypothesis. If I had paid more attention to Sylvia Plath and Robert Lowell, who both suffered from what we today call mood disorder, and less to James Joyce and Bertrand Russell, I might have foreseen this. One after another, my writer subjects came to my office and spent three or four hours pouring out the stories of their struggles with mood disorder—mostly depression, but occasionally bipolar disorder. A full 80 percent of them had had some kind of mood disturbance at some time in their lives, compared with just 30 percent of the control group—only slightly less than an age-matched group in the general population. (At first I had been surprised that nearly all the writers I approached would so eagerly agree to participate in a study with a young and unknown assistant professor—but I quickly came to understand why they were so interested in talking to a psychiatrist.) The Vonneguts turned out to be representative of the writers’ families, in which both mood disorder and creativity were overrepresented—as with the Vonneguts, some of the creative relatives were writers, but others were dancers, visual artists, chemists, architects, or mathematicians. This is consistent with what some other studies have found. When the psychologist Kay Redfield Jamison looked at 47 famous writers and artists in Great Britain, she found that more than 38 percent had been treated for a mood disorder; the highest rates occurred among playwrights, and the second-highest among poets. When Joseph Schildkraut, a psychiatrist at Harvard Medical School, studied a group of 15 abstract-expressionist painters in the mid-20th century, he found that half of them had some form of mental illness, mostly depression or bipolar disorder; nearly half of these artists failed to live past age 60.


 
The brain of a genius: After completing her analysis of a creative person, the author provides the subject with a 3‐D model of his or her brain. (Mike Basher)

WHILE MY WORKSHOP STUDY answered some questions, it raised others. Why does creativity run in families? What is it that gets transmitted? How much is due to nature and how much to nurture? Are writers especially prone to mood disorders because writing is an inherently lonely and introspective activity? What would I find if I studied a group of scientists instead?

These questions percolated in my mind in the weeks, months, and eventually years after the study. As I focused my research on the neurobiology of severe mental illnesses, including schizophrenia and mood disorders, studying the nature of creativity—important as the topic was and is—seemed less pressing than searching for ways to alleviate the suffering of patients stricken with these dreadful and potentially lethal brain disorders. During the 1980s, new neuroimaging techniques gave researchers the ability to study patients’ brains directly, an approach I began using to answer questions about how and why the structure and functional activity of the brain is disrupted in some people with serious mental illnesses.

As I spent more time with neuroimaging technology, I couldn’t help but wonder what we would find if we used it to look inside the heads of highly creative people. Would we see a little genie that doesn’t exist inside other people’s heads?

Today’s neuroimaging tools show brain structure with a precision approximating that of the examination of post-mortem tissue; this allows researchers to study all sorts of connections between brain measurements and personal characteristics. For example, we know that London taxi drivers, who must memorize maps of the city to earn a hackney carriage license, have an enlarged hippocampus—a key memory region—as demonstrated in a magnetic-resonance-imaging, or MRI, study. (They know it, too: on a recent trip to London, I was proudly regaled with this information by several different taxi drivers.) Imaging studies of symphony-orchestra musicians have found them to possess an unusually large Broca’s area—a part of the brain in the left hemisphere that is associated with language—along with other discrepancies. Using another technique, functional magnetic resonance imaging (fMRI), we can watch how the brain behaves when engaged in thought.

Designing neuroimaging studies, however, is exceedingly tricky. Capturing human mental processes can be like capturing quicksilver. The brain has as many neurons as there are stars in the Milky Way, each connected to other neurons by billions of spines, which contain synapses that change continuously depending on what the neurons have recently learned. Capturing brain activity using imaging technology inevitably leads to oversimplifications, as sometimes evidenced by news reports that an investigator has found the location of something—love, guilt, decision making—in a single region of the brain.

And what are we even looking for when we search for evidence of “creativity” in the brain? Although we have a definition of creativity that many people accept—the ability to produce something that is novel or original and useful or adaptive—achieving that “something” is part of a complex process, one often depicted as an “aha” or “eureka” experience. This narrative is appealing—for example, “Newton developed the concept of gravity around 1666, when an apple fell on his head while he was meditating under an apple tree.” The truth is that by 1666, Newton had already spent many years teaching himself the mathematics of his time (Euclidean geometry, algebra, Cartesian coordinates) and inventing calculus so that he could measure planetary orbits and the area under a curve. He continued to work on his theory of gravity over the subsequent years, completing the effort only in 1687, when he published Philosophiæ Naturalis Principia Mathematica. In other words, Newton’s formulation of the concept of gravity took more than 20 years and included multiple components: preparation, incubation, inspiration—a version of the eureka experience—and production. Many forms of creativity, from writing a novel to discovering the structure of DNA, require this kind of ongoing, iterative process.

With functional magnetic resonance imaging, the best we can do is capture brain activity during brief moments in time while subjects are performing some task. For instance, observing brain activity while test subjects look at photographs of their relatives can help answer the question of which parts of the brain people use when they recognize familiar faces. Creativity, of course, cannot be distilled into a single mental process, and it cannot be captured in a snapshot—nor can people produce a creative insight or thought on demand. I spent many years thinking about how to design an imaging study that could identify the unique features of the creative brain.


 
The images on the left show the brain of a creative subject (top) and a matched control subject during a word‐association task. The images on the right show brain activation as the subjects alternate between an experimental task (word association) and a control task (reading a word). The line representing the creative subject’s brain activation moves smoothly up and down as the task changes, reflecting effective use of the association cortices in making connections. The control subject’s activation line looks ragged by comparison.

MOST OF THE HUMAN BRAIN'S high-level functions arise from the six layers of nerve cells and their dendrites embedded in its enormous surface area, called the cerebral cortex, which is compressed to a size small enough to be carried around on our shoulders through a process known as gyrification—essentially, producing lots of folds. Some regions of the brain are highly specialized, receiving sensory information from our eyes, ears, skin, mouth, or nose, or controlling our movements. We call these regions the primary visual, auditory, sensory, and motor cortices. They collect information from the world around us and execute our actions. But we would be helpless, and effectively nonhuman, if our brains consisted only of these regions.

In fact, the most extensively developed regions in the human brain are known as association cortices. These regions help us interpret and make use of the specialized information collected by the primary visual, auditory, sensory, and motor regions. For example, as you read these words on a page or a screen, they register as black lines on a white background in your primary visual cortex. If the process stopped at that point, you wouldn’t be reading at all. To read, your brain, through miraculously complex processes that scientists are still figuring out, needs to forward those black letters on to association-cortex regions such as the angular gyrus, so that meaning is attached to them; and then on to language-association regions in the temporal lobes, so that the words are connected not only to one another but also to their associated memories and given richer meanings. These associated memories and meanings constitute a “verbal lexicon,” which can be accessed for reading, speaking, listening, and writing. Each person’s lexicon is a bit different, even if the words themselves are the same, because each person has different associated memories and meanings. One difference between a great writer like Shakespeare and, say, the typical stockbroker is the size and richness of the verbal lexicon in his or her temporal association cortices, as well as the complexity of the cortices’ connections with other association regions in the frontal and parietal lobes.

A neuroimaging study I conducted in 1995 using positron-emission tomography, or PET, scanning turned out to be unexpectedly useful in advancing my own understanding of association cortices and their role in the creative process.

This PET study was designed to examine the brain’s different memory systems, which the great Canadian psychologist Endel Tulving identified. One system, episodic memory, is autobiographical—it consists of information linked to an individual’s personal experiences. It is called “episodic” because it consists of time-linked sequential information, such as the events that occurred on a person’s wedding day. My team and I compared this with another system, that of semantic memory, which is a repository of general information and is not personal or time-linked. In this study, we divided episodic memory into two subtypes. We examined focused episodic memory by asking subjects to recall a specific event that had occurred in the past and to describe it with their eyes closed. And we examined a condition that we called random episodic silent thought, or REST: we asked subjects to lie quietly with their eyes closed, to relax, and to think about whatever came to mind. In essence, they would be engaged in “free association,” letting their minds wander. The acronym REST was intentionally ironic; we suspected that the association regions of the brain would actually be wildly active during this state.

This suspicion was based on what we had learned about free association from the psychoanalytic approach to understanding the mind. In the hands of Freud and other psychoanalysts, free association—spontaneously saying whatever comes to mind without censorship—became a window into understanding unconscious processes. Based on my interviews with the creative subjects in my workshop study, and from additional conversations with artists, I knew that such unconscious processes are an important component of creativity. For example, Neil Simon told me: “I don’t write consciously—it is as if the muse sits on my shoulder” and “I slip into a state that is apart from reality.” (Examples from history suggest the same thing. Samuel Taylor Coleridge once described how he composed an entire 300-line poem about Kubla Khan while in an opiate-induced, dreamlike state, and began writing it down when he awoke; he said he then lost most of it when he got interrupted and called away on an errand—thus the finished poem he published was but a fragment of what originally came to him in his dreamlike state.)

Based on all this, I surmised that observing which parts of the brain are most active during free association would give us clues about the neural basis of creativity. And what did we find? Sure enough, the association cortices were wildly active during REST.

I realized that I obviously couldn’t capture the entire creative process—instead, I could home in on the parts of the brain that make creativity possible. Once I arrived at this idea, the design for the imaging studies was obvious: I needed to compare the brains of highly creative people with those of control subjects as they engaged in tasks that activated their association cortices.

For years, I had been asking myself what might be special or unique about the brains of the workshop writers I had studied. In my own version of a eureka moment, the answer finally came to me: creative people are better at recognizing relationships, making associations and connections, and seeing things in an original way—seeing things that others cannot see. To test this capacity, I needed to study the regions of the brain that go crazy when you let your thoughts wander. I needed to target the association cortices. In addition to REST, I could observe people performing simple tasks that are easy to do in an MRI scanner, such as word association, which would permit me to compare highly creative people—who have that “genie in the brain”—with the members of a control group matched by age and education and gender, people who have “ordinary creativity” and who have not achieved the levels of recognition that characterize highly creative people. I was ready to design Creativity Study II.

THIS TIME AROUND, I wanted to examine a more diverse sample of creativity, from the sciences as well as the arts. My motivations were partly selfish—I wanted the chance to discuss the creative process with people who might think and work differently, and I thought I could probably learn a lot by listening to just a few people from specific scientific fields. After all, each would be an individual jewel—a fascinating study on his or her own. Now that I’m about halfway through the study, I can say that this is exactly what has happened. My individual jewels so far include, among others, the filmmaker George Lucas, the mathematician and Fields Medalist William Thurston, the Pulitzer Prize–winning novelist Jane Smiley, and six Nobel laureates from the fields of chemistry, physics, and physiology or medicine. Because winners of major awards are typically older, and because I wanted to include some younger people, I’ve also recruited winners of the National Institutes of Health Pioneer Award and other prizes in the arts.

Apart from stating their names, I do not have permission to reveal individual information about my subjects. And because the study is ongoing (each subject can take as long as a year to recruit, making for slow progress), we do not yet have any definitive results—though we do have a good sense of the direction that things are taking. By studying the structural and functional characteristics of subjects’ brains in addition to their personal and family histories, we are learning an enormous amount about how creativity occurs in the brain, as well as whether these scientists and artists display the same personal or familial connections to mental illness that the subjects in my Iowa Writers’ Workshop study did.

To participate in the study, each subject spends three days in Iowa City, since it is important to conduct the research using the same MRI scanner. The subjects and I typically get to know each other over dinner at my home (and a bottle of Bordeaux from my cellar), and by prowling my 40-acre nature retreat in an all-terrain vehicle, observing whatever wildlife happens to be wandering around. Relaxing together and getting a sense of each other’s human side is helpful going into the day and a half of brain scans and challenging conversations that will follow.

We begin the actual study with an MRI scan, during which subjects perform three different tasks, in addition to REST: word association, picture association, and pattern recognition. Each experimental task alternates with a control task; during word association, for example, subjects are shown words on a screen and asked to either think of the first word that comes to mind (the experimental task) or silently repeat the word they see (the control task). Speaking disrupts the scanning process, so subjects silently indicate when they have completed a task by pressing a button on a keypad.

Playing word games inside a thumping, screeching hollow tube seems like a far cry from the kind of meandering, spontaneous discovery process that we tend to associate with creativity. It is, however, as close as one can come to a proxy for that experience, apart from REST. You cannot force creativity to happen—every creative person can attest to that. But the essence of creativity is making connections and solving puzzles. The design of these MRI tasks permits us to visualize what is happening in the creative brain when it’s doing those things.

As I hypothesized, the creative people have shown stronger activations in their association cortices during all four tasks than the controls have. (See the images above.) This pattern has held true for both the artists and the scientists, suggesting that similar brain processes may underlie a broad spectrum of creative expression. Common stereotypes about “right brained” versus “left brained” people notwithstanding, this parallel makes sense. Many creative people are polymaths, people with broad interests in many fields—a common trait among my study subjects.

After the brain scans, I settle in with subjects for an in-depth interview. Preparing for these interviews can be fun (rewatching all of George Lucas’s films, for example, or reading Jane Smiley’s collected works) as well as challenging (toughing through mathematics papers by William Thurston). I begin by asking subjects about their life history—where they grew up, where they went to school, what activities they enjoyed. I ask about their parents—their education, occupation, and parenting style—and about how the family got along. I learn about brothers, sisters, and children, and get a sense for who else in a subject’s family is or has been creative and how creativity may have been nurtured at home. We talk about how the subjects managed the challenges of growing up, any early interests and hobbies (particularly those related to the creative activities they pursue as adults), dating patterns, life in college and graduate school, marriages, and child-rearing. I ask them to describe a typical day at work and to think through how they have achieved such a high level of creativity. (One thing I’ve learned from this line of questioning is that creative people work much harder than the average person—and usually that’s because they love their work.)

One of the most personal and sometimes painful parts of the interview is when I ask about mental illness in subjects’ families as well as in their own lives. They’ve told me about such childhood experiences as having a mother commit suicide or watching ugly outbreaks of violence between two alcoholic parents, and the pain and scars that these experiences have inflicted. (Two of the 13 creative subjects in my current study have lost a parent to suicide—a rate many times that of the general U.S. population.) Talking with those subjects who have suffered from a mental illness themselves, I hear about how it has affected their work and how they have learned to cope.


The author’s research on creativity includes in-depth neurological studies of “individual jewels,” including Pulitzer Prize–winning novelist Jane Smiley, shown here in 1991. (AP)

SO FAR, THIS STUDY—which has examined 13 creative geniuses and 13 controls—has borne out a link between mental illness and creativity similar to the one I found in my Writers’ Workshop study. The creative subjects and their relatives have a higher rate of mental illness than the controls and their relatives do (though not as high a rate as I found in the first study), with the frequency being fairly even across the artists and the scientists. The most-common diagnoses include bipolar disorder, depression, anxiety or panic disorder, and alcoholism. I’ve also found some evidence supporting my early hypothesis that exceptionally creative people are more likely than control subjects to have one or more first-degree relatives with schizophrenia. Interestingly, when the physician and researcher Jon L. Karlsson examined the relatives of everyone listed in Iceland’s version of Who’s Who in the 1940s and ’60s, he found that they had higher-than-average rates of schizophrenia. Leonard Heston, a former psychiatric colleague of mine at Iowa, conducted an influential study of the children of schizophrenic mothers raised from infancy by foster or adoptive parents, and found that more than 10 percent of these children developed schizophrenia, as compared with zero percent of a control group. This suggests a powerful genetic component to schizophrenia. Heston and I discussed whether some particularly creative people owe their gifts to a subclinical variant of schizophrenia that loosens their associative links sufficiently to enhance their creativity but not enough to make them mentally ill.

As in the first study, I’ve also found that creativity tends to run in families, and to take diverse forms. In this arena, nurture clearly plays a strong role. Half the subjects come from very high-achieving backgrounds, with at least one parent who has a doctoral degree. The majority grew up in an environment where learning and education were highly valued. This is how one person described his childhood:
Our family evenings—just everybody sitting around working. We’d all be in the same room, and [my mother] would be working on her papers, preparing her lesson plans, and my father had huge stacks of papers and journals … This was before laptops, and so it was all paper-based. And I’d be sitting there with my homework, and my sisters are reading. And we’d just spend a few hours every night for 10 to 15 years—that’s how it was. Just working together. No TV.
So why do these highly gifted people experience mental illness at a higher-than-average rate? Given that (as a group) their family members have higher rates than those that occur in the general population or in the matched comparison group, we must suspect that nature plays a role—that Francis Galton and others were right about the role of hereditary factors in people’s predisposition to both creativity and mental illness. We can only speculate about what those factors might be, but there are some clues in how these people describe themselves and their lifestyles.

One possible contributory factor is a personality style shared by many of my creative subjects. These subjects are adventuresome and exploratory. They take risks. Particularly in science, the best work tends to occur in new frontiers. (As a popular saying among scientists goes: “When you work at the cutting edge, you are likely to bleed.”) They have to confront doubt and rejection. And yet they have to persist in spite of that, because they believe strongly in the value of what they do. This can lead to psychic pain, which may manifest itself as depression or anxiety, or lead people to attempt to reduce their discomfort by turning to pain relievers such as alcohol.

I’ve been struck by how many of these people refer to their most creative ideas as “obvious.” Since these ideas are almost always the opposite of obvious to other people, creative luminaries can face doubt and resistance when advocating for them. As one artist told me, “The funny thing about [one’s own] talent is that you are blind to it. You just can’t see what it is when you have it … When you have talent and see things in a particular way, you are amazed that other people can’t see it.” Persisting in the face of doubt or rejection, for artists or for scientists, can be a lonely path—one that may also partially explain why some of these people experience mental illness.

ONE INTERESTING PARADOX that has emerged during conversations with subjects about their creative processes is that, though many of them suffer from mood and anxiety disorders, they associate their gifts with strong feelings of joy and excitement. “Doing good science is simply the most pleasurable thing anyone can do,” one scientist told me. “It is like having good sex. It excites you all over and makes you feel as if you are all-powerful and complete.” This is reminiscent of what creative geniuses throughout history have said. For instance, here’s Tchaikovsky, the composer, writing in the mid-19th century:
It would be vain to try to put into words that immeasurable sense of bliss which comes over me directly a new idea awakens in me and begins to assume a definite form. I forget everything and behave like a madman. Everything within me starts pulsing and quivering; hardly have I begun the sketch ere one thought follows another.
Another of my subjects, a neuroscientist and an inventor, told me, “There is no greater joy that I have in my life than having an idea that’s a good idea. At that moment it pops into my head, it is so deeply satisfying and rewarding … My nucleus accumbens is probably going nuts when it happens.” (The nucleus accumbens, at the core of the brain’s reward system, is activated by pleasure, whether it comes from eating good food or receiving money or taking euphoria-inducing drugs.)

As for how these ideas emerge, almost all of my subjects confirmed that when eureka moments occur, they tend to be precipitated by long periods of preparation and incubation, and to strike when the mind is relaxed—during that state we called REST. “A lot of it happens when you are doing one thing and you’re not thinking about what your mind is doing,” one of the artists in my study told me. “I’m either watching television, I’m reading a book, and I make a connection … It may have nothing to do with what I am doing, but somehow or other you see something or hear something or do something, and it pops that connection together.”

Many subjects mentioned lighting on ideas while showering, driving, or exercising. One described a more unusual regimen involving an afternoon nap: “It’s during this nap that I get a lot of my work done. I find that when the ideas come to me, they come as I’m falling asleep, they come as I’m waking up, they come if I’m sitting in the tub. I don’t normally take baths … but sometimes I’ll just go in there and have a think.”

SOME OF THE OTHER most common findings my studies have suggested include:

Many creative people are autodidacts. They like to teach themselves, rather than be spoon-fed information or knowledge in standard educational settings. Famously, three Silicon Valley creative geniuses have been college dropouts: Bill Gates, Steve Jobs, and Mark Zuckerberg. Steve Jobs—for many, the archetype of the creative person—popularized the motto “Think different.” Because their thinking is different, my subjects often express the idea that standard ways of learning and teaching are not always helpful and may even be distracting, and that they prefer to learn on their own. Many of my subjects taught themselves to read before even starting school, and many have read widely throughout their lives. For example, in his article “On Proof and Progress in Mathematics,” Bill Thurston wrote:
My mathematical education was rather independent and idiosyncratic, where for a number of years I learned things on my own, developing personal mental models for how to think about mathematics. This has often been a big advantage for me in thinking about mathematics, because it’s easy to pick up later the standard mental models shared by groups of mathematicians.
This observation has important implications for the education of creatively gifted children. They need to be allowed and even encouraged to “think different.” (Several subjects described to me how they would get in trouble in school for pointing out when their teachers said things that they knew to be wrong, such as when a second-grade teacher explained to one of my subjects that light and sound are both waves and travel at the same speed. The teacher did not appreciate being corrected.)

Many creative people are polymaths, as historic geniuses including Michelangelo and Leonardo da Vinci were. George Lucas was awarded not only the National Medal of Arts in 2012 but also the National Medal of Technology in 2004. Lucas’s interests include anthropology, history, sociology, neuroscience, digital technology, architecture, and interior design. Another polymath, one of the scientists, described his love of literature:
I love words, and I love the rhythms and sounds of words … [As a young child] I very rapidly built up a huge storehouse of … Shakespearean sonnets, soliloquies, poems across the whole spectrum … When I got to college, I was open to many possible careers. I actually took a creative-writing course early. I strongly considered being a novelist or a writer or a poet, because I love words that much … [But for] the academics, it’s not so much about the beauty of the words. So I found that dissatisfying, and I took some biology courses, some quantum courses. I really clicked with biology. It seemed like a complex system that was tractable, beautiful, important. And so I chose biochemistry.
The arts and the sciences are seen as separate tracks, and students are encouraged to specialize in one or the other. If we wish to nurture creative students, this may be a serious error.

Creative people tend to be very persistent, even when confronted with skepticism or rejection. Asked what it takes to be a successful scientist, one replied:
Perseverance … In order to have that freedom to find things out, you have to have perseverance … The grant doesn’t get funded, and the next day you get up, and you put the next foot in front, and you keep putting your foot in front … I still take things personally. I don’t get a grant, and … I’m upset for days. And then I sit down and I write the grant again.
DO CREATIVE PEOPLE simply have more ideas, and therefore differ from average people only in a quantitative way, or are they also qualitatively different? One subject, a neuroscientist and an inventor, addressed this question in an interesting way, conceptualizing the matter in terms of kites and strings:
In the R&D business, we kind of lump people into two categories: inventors and engineers. The inventor is the kite kind of person. They have a zillion ideas and they come up with great first prototypes. But generally an inventor … is not a tidy person. He sees the big picture and … [is] constantly lashing something together that doesn’t really work. And then the engineers are the strings, the craftsmen [who pick out a good idea] and make it really practical. So, one is about a good idea, the other is about … making it practical.
Of course, having too many ideas can be dangerous. One subject, a scientist who happens to be both a kite and a string, described to me “a willingness to take an enormous risk with your whole heart and soul and mind on something where you know the impact—if it worked—would be utterly transformative.” The if here is significant. Part of what comes with seeing connections no one else sees is that not all of these connections actually exist. “Everybody has crazy things they want to try,” that same subject told me. “Part of creativity is picking the little bubbles that come up to your conscious mind, and picking which one to let grow and which one to give access to more of your mind, and then have that translate into action.”

In A Beautiful Mind, her biography of the mathematician John Nash, Sylvia Nasar describes a visit Nash received from a fellow mathematician while institutionalized at McLean Hospital. “How could you, a mathematician, a man devoted to reason and logical truth,” the colleague asked, “believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world?” To which Nash replied: “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Some people see things others cannot, and they are right, and we call them creative geniuses. Some people see things others cannot, and they are wrong, and we call them mentally ill. And some people, like John Nash, are both.