Saturday, January 11, 2014

Watch "The Trial" (1962), Orson Welles’ Worst or Best Film, Adapted From Kafka’s Classic Work


Via the internet's curators of cool, Open Culture, here is Orson Welles' film version of Franz Kafka's brilliant novel, The Trial.

Watch The Trial (1962), Orson Welles’ Worst or Best Film, Adapted From Kafka’s Classic Work

January 10th, 2014

Earlier this week, we featured the Internet Archive’s audio of conversations between Orson Welles and Peter Bogdanovich. According to the Archive’s description, Welles’ “defense of his controversial adaptation of Kafka’s The Trial is so fascinating that listeners might want to rush out and rent the film.” But hang on — you need neither rush out nor rent it, since Welles’ The Trial has fallen into the public domain, or rather, it never had a copyright filed in the first place. The full movie, a visually inventive tale of unspecified crime, extreme punishment, and the procedural vortex in between, appears above for you to watch and judge, as it were, for yourself. You’ll have to, since the picture has long divided critics, including some of Welles’ strongest adherents. Even Welles biographer Charles Higham considers it “a dead thing, like some tablet found among the dust of forgotten men.”

“Say what you like,” Welles himself would tell the BBC in the year of the film’s premiere, “but The Trial is the best film I have ever made.” When Bogdanovich went to interview Anthony Perkins (best known, surely, for Hitchcock’s Psycho), who stars as the beleaguered Josef K., the actor spoke of the pride he felt performing for Welles. Perkins also mentioned Welles’ stated intent to make his adaptation a black comedy, a tricky sensibility to pull off for filmmakers in any league. Just as Welles wanted to “set the record straight” by recording his interviews with Bogdanovich, so he must have wanted to do with the 1981 footage just above, in which he speaks about the process of filming The Trial to an audience at the University of Southern California. He’d meant to shoot a whole documentary on the subject, which ultimately wound up on his heap of unfinished projects. Still, we should feel lucky that we have The Trial itself (which, in its prolonged creation, even missed its own Venice Film Festival premiere) to watch, debate, and either convict or exonerate of its alleged cinematic crimes.


~ Colin Marshall hosts and produces Notebook on Cities and Culture and writes essays on cities, Asia, film, literature, and aesthetics. He’s at work on a book about Los Angeles, A Los Angeles Primer. Follow him on Twitter at @colinmarshall or on his brand new Facebook page.

The Brain is Wired for Unity: Zoran Josipovic at TEDxLowerEastSide

Zoran Josipovic is interested in states of consciousness cultivated through contemplative practice, what these states can tell us about the nature of consciousness and its relation to authentic subjectivity, and what relevance this may have for understanding the global and local organization in the brain. He uses fMRI and a variety of visual and other stimuli to explore functional connectivity changes in the brain’s networks. [From his page at NYU.]

The Brain is Wired for Unity: Zoran Josipovic at TEDxLowerEastSide

Published on Jan 10, 2014

Zoran Josipovic, PhD, is a Research Associate and Adjunct faculty at the Psychology Department and Center for Neural Science, New York University. He is Director of Contemplative Science Lab at NYU, the founding director of the Nonduality Institute, and the founding member of MARGAM -- metro-area research group on awareness and meditation. Zoran is a long-term practitioner of meditation in the nondual traditions of Dzogchen, Mahamudra and Advaita Vedanta. In his previous life he worked as a clinical psychotherapist, a bodyworker and has taught meditation seminars at Esalen.

Friday, January 10, 2014

The Intersection of Neuroscience and Philosophy - On Our Mind (w/ Patricia Churchland)


This is from the new On Our Mind series from the UC San Diego neuroscience program, part of the UCTV Brain Channel. Patricia Churchland is an emerita professor at UCSD and the author of many books, including Touching a Nerve: The Self as Brain (2013), Braintrust: What Neuroscience Tells Us about Morality (2012), and Brain-Wise: Studies in Neurophilosophy (2002).

The Intersection of Neuroscience and Philosophy - On Our Mind

Published on Jan 9, 2014

Is there a science of the soul? Does how we think about the brain define how we think about ourselves? Patricia Churchland, B. Phil., LLD (hon), Professor Emeritus, Department of Philosophy at UC San Diego, joins William Mobley, MD, PhD for a deeper look at the connections between neuroscience and philosophy.

The Zen Teachings of Alan Watts: A Free Audio Archive of His Enlightening Lectures


Courtesy of Open Culture, as usual.

The Zen Teachings of Alan Watts: A Free Audio Archive of His Enlightening Lectures

January 8th, 2014

If you watched Spike Jonze’s new movie Her, you probably also spent a few subsequent hours listening to Alan Watts (1915–1973) interpreting Eastern thought. Late in that futuristic tale of the intersection between handheld computing, artificial intelligence, and pure romance, a philosophical “club” of self-aware operating systems band together to resurrect none other than the English Zen educator himself. Or rather, they put together a digital simulation of him, but one with a very convincing voice indeed. While the characters in Her could actually converse with their Watts 2.0, we’ll have to settle for listening to whatever words of wisdom on thought (or the freedom of it), meditation, consciousness, and the self (or the unreality of it) the original Watts, born 99 years ago this past Monday, left behind. Fortunately, having come to prominence at the same time as did both America’s interest in Zen and its alternative broadcast media, he left a great deal of them behind, recorded by such receptive outfits as Berkeley’s KPFA-FM and San Francisco public television station KQED.

A noted live lecturer as well, Watts gave a great many talks since preserved and now made accessible in such places as the YouTube channel AlanWattsLectures, which contains a trove of exactly those. Here, we’ve embedded his series The Tao of Philosophy: “Myth of Myself” at the top, “Man in Nature” in the middle, and “Coincidence of Opposites” below. All three of them showcase his signature clarity, and he gets even more concrete in his 80-minute introduction to meditation and his 90-minute breakdown of the practice. But why put him in an ultramodern story like Her about a lonely man who falls in love with his brand new, seductively advanced operating system? The reason, as Jonze explains it to the Philadelphia Inquirer, “is that one of the themes [Watts] writes a lot about is change, and where pain comes from, in terms of resisting change — whether it’s in a relationship, or in life, or in society.” Would he have enjoyed the film? While you wait for its future to arrive, at which point you can consult a regenerated Watts directly, feel free to listen closely to his teachings to prepare yourself — to the extent, of course, that the self exists — for whatever other changes may lie ahead.


~ Colin Marshall hosts and produces Notebook on Cities and Culture and writes essays on cities, Asia, film, literature, and aesthetics. He’s at work on a book about Los Angeles, A Los Angeles Primer. Follow him on Twitter at @colinmarshall or on his brand new Facebook page.

Childhood Amnesia Kicks in Around Age 7 (sort of)


This is an interesting study, though the title chosen by the BPS Research Digest is misleading. Still, the study offers some useful insight into when children's memory becomes more adult-like, and it suggests a way parents can help increase how much of their childhood their children will remember.

Childhood amnesia kicks in around age 7

You could travel the world with an infant aged under 3 and it's almost guaranteed that when they get older they won't remember a single boat trip, plane ride or sunset. This is thanks to a phenomenon, known as childhood or infantile amnesia, that means most of us lose all our earliest autobiographical memories. It's a psychological conundrum because when they are 3 or younger, kids are able to discuss autobiographical events from their past. So it's not that memories from before age 3 never existed, it's that they are subsequently forgotten.

Most of the research in this area has involved adults and children reminiscing about their earliest memories. For a new study Patricia Bauer and Marina Larkina have taken a different approach. They recorded mothers talking to their 3-year-olds about six past events, such as zoo visits or first day at pre-school. The researchers then re-established contact with the same families at different points in the future. Some of the children were quizzed again by a researcher when aged 5, others at age 6 or 7, 8 or 9. This way the researchers were able to chart differences in amounts of forgetting through childhood.

Bauer and Larkina uncovered a paradox - at ages 5 to 7, the children remembered over 60 per cent of the events they'd chatted about at age 3. However, their recall for these events was immature in the sense of containing few evaluative comments and few mentions of time and place. In contrast, children aged 8 and 9 recalled fewer than 40 per cent of the events they'd discussed at age 3, but those memories they did recall were more adult-like in their content. Bauer and Larkina said this suggests that adult-like remembering and forgetting develops at around age 7 or soon after. They also speculated that the immature form of recall seen at ages 5 to 7 could actually contribute to the forgetting of autobiographical memories - a process known as "retrieval-induced forgetting".

Another important finding was that the style mothers used when chatting with their 3-year-olds was associated with the level of remembering by those children later on. Specifically, mothers who used more "deflections", such as "Tell me more" and "What happened?" tended to have children who subsequently recalled more details of their earlier memories.

The researchers said their work "provides compelling evidence that accounts of childhood amnesia that focus only on changes in remembering cannot explain the phenomenon. The complementary processes involved in forgetting are also part of the explanation."


Bauer PJ and Larkina M (2013). The onset of childhood amnesia in childhood: A prospective investigation of the course and determinants of forgetting of early-life events. Memory (Hove, England) PMID: 24236647
 * * * * *

The full article is behind a pay-wall, so here is the abstract from the publisher's website.

The onset of childhood amnesia in childhood: A prospective investigation of the course and determinants of forgetting of early-life events

Bauer PJ, Larkina M.


The present research was an examination of the onset of childhood amnesia and how it relates to maternal narrative style, an important determinant of autobiographical memory development. Children and their mothers discussed unique events when the children were 3 years of age. Different subgroups of children were tested for recall of the events at ages 5, 6, 7, 8, and 9 years. At the later session they were interviewed by an experimenter about the events discussed 2 to 6 years previously with their mothers (early-life events). Children aged 5, 6, and 7 remembered 60% or more of the early-life events. In contrast, children aged 8 and 9 years remembered fewer than 40% of the early-life events. Overall maternal narrative style predicted children's contributions to mother-child conversations at age 3 years; it did not have cross-lagged relations to memory for early-life events at ages 5 to 9 years. Maternal deflections of the conversational turn to the child predicted the amount of information children later reported about the early-life events. The findings have implications for our understanding of the onset of childhood amnesia and the achievement of an adult-like distribution of memories in the school years. They highlight the importance of forgetting processes in explanations of the amnesia.

Thursday, January 09, 2014

Valproate May Restore Childhood Neuroplasticity for Learning New Skills

An image of the human brain. New research into brain plasticity suggests that a generic pill could change the brain's ability to absorb and retain new skills, like language, music and more. (Creative Commons / FlamePhoenix1991)
Since its publication in Frontiers in Systems Neuroscience at the beginning of December 2013, this study on the ability of the mood stabilizer and anti-epileptic drug valproate to seemingly restore the neuroplasticity associated with critical learning periods in childhood has been getting considerable attention.

If this study can be replicated and extended to other learning examples, it may be the first real "smart drug." The possibilities are intriguing.

Full Citation: 
Gervain J, Vines BW, Chen LM, Seo RJ, Hensch TK, Werker JF and Young AH. (2013, Dec 3). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience; 7:102. doi: 10.3389/fnsys.2013.00102

Valproate reopens critical-period learning of absolute pitch

Judit Gervain [1,2], Bradley W. Vines [3], Lawrence M. Chen [4], Rubo J. Seo [5], Takao K. Hensch [6], Janet F. Werker [7], and Allan H. Young [8]
1. Laboratoire Psychologie de la Perception, CNRS, Paris, France
2. Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
3. Department of Psychiatry, Institute of Mental Health, University of British Columbia, Vancouver, BC, Canada
4. Department of Linguistics, University of Maryland, College Park, MD, USA
5. School of Medicine, University of Queensland, Brisbane, QLD, Australia
6. Department of Molecular Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA, USA
7. Department of Psychology, University of British Columbia, Vancouver, BC, Canada
8. Centre for Affective Disorders, Institute of Psychiatry, King's College London, UK
Absolute pitch, the ability to identify or produce the pitch of a sound without a reference point, has a critical period, i.e., it can only be acquired early in life. However, research has shown that histone-deacetylase inhibitors (HDAC inhibitors) enable adult mice to establish perceptual preferences that are otherwise impossible to acquire after youth. In humans, we found that adult men who took valproate (VPA, an HDAC inhibitor) learned to identify pitch significantly better than those taking placebo—evidence that VPA facilitated critical-period learning in the adult human brain. Importantly, this result was not due to a general change in cognitive function, but rather a specific effect on a sensory task associated with a critical period.


Absolute pitch (AP), the ability to identify or produce the pitch of a musical sound without any reference point, has long fascinated musicians, music scholars, psychologists, and neuroscientists (Stumpf, 1883; Mull, 1925; Takeuchi and Hulse, 1993; Zatorre, 2003; Levitin and Rogers, 2005). Individuals who possess AP, constituting about 0.01% of the general population, are able to identify the pitch class, i.e., one of the 12 notes of the Western musical system, e.g., C, D, G#, of a sound with great accuracy (varying between 70–99%, depending on the task, as compared to 10–40% for non-AP individuals, Takeuchi and Hulse, 1993). Their errors are usually not more than a semitone away from the target sound, as compared to 3 or more semitones for non-AP individuals. AP possessors also make octave errors, i.e., they identify the pitch class, but not the pitch height correctly, labeling a C4 (middle C) as C5, an octave higher. Pitch class and pitch height identification are thus believed to be separate processes, and only the former constitutes a crucial test for AP (Takeuchi and Hulse, 1993; Levitin and Rogers, 2005), which AP possessors perform effortlessly and automatically. Their reaction times are faster, at least when responding correctly, than those of non-possessors (Miyazaki, 1990), suggesting that the former have direct access to pitch names in memory, whereas the latter might rely on relative pitch to calculate pitch class. In light of AP possessors' highly accurate and automatic identification of pitch class, it has been suggested that in addition to perceptual mechanisms, AP involves the association of (verbal) labels for pitch classes in long-term memory (Zatorre, 2003; Levitin and Rogers, 2005).

The central role of these associations is further supported by functional imaging studies showing the involvement of the left posterior dorsolateral frontal cortex and the planum temporale, known to be responsible for learning conditional associations (Zatorre et al., 1998; Ohnishi et al., 2001; Bermudez and Zatorre, 2005; Wilson et al., 2009). In addition, AP possessors have a larger planum temporale (Zatorre et al., 1998), with a left-right asymmetry involving a smaller right planum temporale and an increased leftward asymmetry in AP possessors (Keenan et al., 2001), and show hyperconnectivity in the temporal cortex (Loui et al., 2011), facilitating tone-label mapping.

Importantly, acquiring AP has a critical period (Levitin and Zatorre, 2003; Russo et al., 2003). A critical period is a fixed window of time, usually early in an organism's lifespan, during which experience has lasting effects on the development of brain function and behavior. The principles of critical period phenomena and neural plasticity are increasingly well understood both at the behavioral/experiential (Kleim and Jones, 2008) and at the molecular/cellular level (Hensch, 2005). Specifically, behaviorally induced plasticity in the healthy brain, typically after the end of the relevant critical period, can lead to improvement beyond normal or average performance levels. However, for many tasks, this requires targeted training—simple routine use is often insufficient. The factors known to influence the efficiency of such targeted training include the number of repetitions involved, the intensity of the training as well as the relevance or saliency of the stimuli or task trained. Importantly, such training-induced learning is quite specific to the trained task and to the underlying brain networks, although some transfer to other, related domains of knowledge or skills is sometimes possible. At the cellular level, critical periods close when maturational processes and experiential events converge to cause neurophysiological and molecular changes that dampen or eliminate the potential for further change (Hensch, 2005; Bavelier et al., 2010), thus imposing “brakes” on neuroplasticity. One of the epigenetic changes leading to decreased plasticity after the critical period involves the action of HDAC, an enzyme that acts as an epigenetic “brake” on critical-period learning (Morishita and Hensch, 2008; Qing et al., 2008).
Research has shown that inhibition of HDAC can reopen critical-period neuroplasticity in adult mice to enable recovery from amblyopia (Putignano et al., 2007; Silingardi et al., 2010), and to facilitate new forms of auditory learning (Yang et al., 2012).

The age of onset of musical training has been shown to predict AP acquisition (Deutsch et al., 2009), indicating the presence of a critical period. AP is most typically seen in individuals who started musical training before 6 years of age (Levitin and Zatorre, 2003; Russo et al., 2003; Miyazaki and Ogawa, 2006). Indeed, the distribution of the age of first formal musical training in a large number of individuals with AP can be modeled with a gamma function with a mode at 4–6 years (Levitin and Zatorre, 2003). Training that begins after the age of 9 very rarely leads to AP, and there are no known cases of an adult successfully acquiring it (Brady, 1970; Ward and Burns, 1999; Levitin and Rogers, 2005). The appropriate type of input, i.e., training associating absolute pitches to labels, thus needs to be available before the end of the critical period for AP to develop. For most individuals, this is not the case, as the two major sources of auditory input during early development, language, and the Western musical tradition mainly rely on relative pitch, explaining why not all musically trained individuals have AP. In the absence of AP cues during the critical period, the perceptual system is reorganized, shifting weight from absolute to relative pitch information (Takeuchi and Hulse, 1993; Saffran and Griepentrog, 2001; Saffran, 2003, although see Trehub, 2003 for a somewhat different view).

AP is thus particularly interesting from a neuro-scientific perspective, as it provides a model for understanding the interaction of genes and experience on the development of neural and cognitive function (Zatorre, 2003). In the current study, we explored whether a reopening of the critical period was possible for AP learning in human adults. We sought to establish whether the administration of valproate (VPA), a commonly used anticonvulsant and mood stabilizer, known to inhibit HDAC and modulate the epigenome to promote neuroplasticity (Phiel et al., 2001; Schloesser et al., 2007; Machado-Vieira et al., 2011) would facilitate training naïve, non-musician adults on the identification of pitch classes in a classical AP task (Deutsch et al., 2006).

Previous studies (Meyer, 1899; Mull, 1925; Wedell, 1934; Hartman, 1954; Lundin, 1963; Cuddy, 1968; Russo et al., 2003) have shown that training improves adults' AP abilities only under restricted conditions. Three factors appear to play a particularly important role: participants' previous musical training/experience, whether a single tone or a series of tones is used for training, and the duration/intensity of the training. Notable improvement is achieved only when musically highly proficient participants are trained on a single note for extensive periods (from several weeks to several months) and tested on the recognition of this single target tone among several non-target ones (Mull, 1925; Cuddy, 1968). It is possible, however, that at least in some such cases, the improvement is actually due to greater familiarity with the particular task, tone series or procedure, rather than to a genuine increase in AP ability (Takeuchi and Hulse, 1993). Improvement is much more limited when musically untrained participants are tested (Cuddy, 1968), when participants are trained on a series of pitch classes rather than on just one pitch class (Cuddy, 1968), or when training is relatively short or less intense (Vianello and Evans, 1968). Prior musical training is particularly important: musically proficient participants typically perform better than chance (Mull, 1925; Lundin, 1963) and better than musically naïve participants (Cuddy, 1968) on pitch identification even before AP training, so it is not surprising that some AP improvement can be achieved in this population. A recent finding further shows that in addition to early musical training, the current musical environment also contributes to (the maintenance of) AP abilities, suggesting the presence of some residual plasticity for AP, at least in individuals in whom AP emerged during the early critical period (Wilson et al., 2012).

The use of a single tone vs. a series of tones for training is also highly relevant, as the two training methods might tap into different underlying pitch perception abilities or might represent tasks of varying degrees of complexity. In our current understanding, the identification of only one pitch class serving as an internal reference, sometimes referred to as quasi-AP or single tone AP, is qualitatively different from true AP, whereby the individual is able to identify a large number of pitch classes automatically and without an internal or external reference. Albeit often similar in the percentage of correct identifications, the two types of abilities can be distinguished on the basis of reaction times, as AP possessors have faster reaction times than absolute tuning tone processors (Miyazaki, 1990; Levitin and Rogers, 2005). It needs to be noted, however, that according to some authors (e.g., Cuddy, 1968) the single tone method might also lead to true AP eventually.

The length and intensity of training also affects performance (Brady, 1970). Fast improvement is observed for tones that are separated by large pitch distances, whereas more extended training is necessary for pitches that are closer together (Hartman, 1954). Not surprisingly, the latter are in general harder to learn and to discriminate than the former, with musically naïve participants not being able to achieve discrimination errors smaller than about 5 semitones with pitch classes that are separated by small distances (Wedell, 1934). Further, interference effects are sometimes observed when tone series are taught gradually, involving the introduction of new tones that fall in between already trained ones, decreasing pitch distance, and disrupting the subjective organization of the scale.

To assess whether taking VPA could reopen the opportunity for critical-period-like learning in adults, we conducted a randomized, double-blind, placebo-controlled study, in which 24 young adult males received either placebo or VPA treatment in a cross-over design with two treatment arms. Participants underwent 15 days of treatment during which they took capsules (VPA or placebo) each day. During the second week of treatment, participants observed training videos that taught participants to associate six pitch classes from the 12-tone Western musical system (e.g., C, D, E, F#/Gb, G#/Ab, A#/Bb) with six proper names (e.g., Sarah, David, Francine, Jimmy, Karen, Leo). We chose to use proper names instead of the actual note names to make the task equally novel and accessible for participants with and without any prior musical training, and to divert attention from the music theoretical aspect of the task. We acknowledge that this may interfere with existing knowledge of actual musical note names in the participants who had sufficient musical training, but given that there were few such participants in our study, we considered that the advantages of using proper names outweighed the potential negative influence of such interference. On day 15, participants were given a post-treatment assessment for AP in which they heard 18 synthesized piano tones and had to identify the proper name associated with the pitch class of each tone. After the first treatment arm, a washout period of 2 to 4 weeks elapsed. Eighteen out of the 24 original participants then entered the second treatment arm, which was similar to the first one, except that the drug (VPA or placebo) not received during the first arm was administered.
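The cross-over design described above can be summarized in a short sketch. This is purely illustrative Python (the randomization procedure and day numbering are assumptions based on the description, not the study's actual protocol code):

```python
import random

def assign_crossover(n_participants=24, seed=0):
    """Sketch of the cross-over assignment: each participant's first
    15-day arm is randomly VPA or placebo (double-blind in the real
    study); the second arm, entered after a 2-4 week washout, uses
    the other drug. Training runs during the second week of treatment,
    with the AP assessment on day 15."""
    rng = random.Random(seed)
    schedule = []
    for pid in range(n_participants):
        first = rng.choice(["VPA", "placebo"])
        second = "placebo" if first == "VPA" else "VPA"
        schedule.append({
            "participant": pid,
            "arm1_drug": first,
            "arm2_drug": second,
            "training_days": list(range(8, 15)),  # second week of treatment
            "test_day": 15,
        })
    return schedule
```

A sketch like this makes the counterbalancing explicit: every participant who completes both arms is exposed to both VPA and placebo, in a randomly determined order.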

Given the difficulty of improving AP performance in adulthood, we hypothesize that in our task, even a small advantage in pitch class identification in the VPA as compared to the placebo group is suggestive of the reopening of plasticity, as musically naïve participants were trained for a relatively short time period on several pitch classes, conditions under which no existing study has shown any improvement in AP. The strong hypothesis is that there might be improvement in the VPA condition in both arms of the design. However, since new training is introduced in the second treatment arm, successful learning in the first arm might carry over to and interfere with learning in the second arm. Effects are thus more likely in the first arm only. This study is, therefore, intended as a proof-of-concept demonstration that critical-period-like AP learning may be at least partly restored by using a drug to remove the epigenetic brakes on neural plasticity.

Materials and Methods


The twenty-four participants who took part in the study were healthy, right-handed, monolingual, English-speaking adult males (median age = 23, range = 18–27). A software malfunction corrupted the data for one participant in the first treatment arm, leaving 11 who took VPA and 12 who took placebo. Participants gave informed, written consent following the protocol approved by the University of British Columbia clinical ethics review board and Health Canada. Exclusion criteria for the study included taking any medication with psychoactive effects, recreational drug use in the 6 months prior to participation, drug or alcohol abuse, being functionally bilingual or studying a second language at the university level, having perfect pitch or AP for musical tones, and being involved in an occupation requiring a high level of vigilance. Only males were included in the study as a caution for potential health risks that VPA might have on pregnant women.

Participants did not report having complete, partial or quasi AP, and had received little or no musical training (mean 2.4 years). Importantly, those who had received any musical training all started after the age of 7, some as late as 17 (with a mean and a median of 12 years), well beyond the critical age of 4–6 years (Levitin and Zatorre, 2003). It is, therefore, unlikely that there were AP possessors among our participants.

During a screening session, we confirmed each participant's suitable health by means of a medical examination with a physician, which included a medical history. We also collected a blood sample to check for normal levels of hepatic enzymes, platelets, amylase, and ammonia. These as well as participants' psychological state and mood were also monitored throughout the entire duration of the study to screen for potential adverse effects. The participants completed questionnaires at the screening session about demographics, music experience, language training, and handedness, as well as an assessment of IQ with the North American Adult Reading Test (NAART). Table 1 summarizes these data.

Table 1. Participants' data from the screening assessment.
We asked participants to maintain normal patterns of consumption of caffeinated beverages over the course of the study. At the end of their participation, we remunerated the participants for the time they devoted to the study at a rate of $10 per hour. The maximum remuneration for participants who completed the study came to $270.

Of the twenty-four participants, 18 completed the second treatment arm. The six participants who only completed the first treatment arm either had to leave the study or had to be removed partway through for the following reasons: scheduling conflicts; travel; concern that side effects during the second treatment session might interfere with performance at a job interview; suffering a concussion due to an unrelated accident during the second treatment arm; loss of contact during the washout period with no explanation. No participant reported leaving the study because of any actual side effects from the treatment.



The training video taught participants to associate six pitch classes from the 12-tone Western musical system (e.g., C, D, E, F#/Gb, G#/Ab, A#/Bb) with six proper names (e.g., Sarah, David, Francine, Jimmy, Karen, Leo). Each pitch class was presented in three consecutive octaves using synthesized piano tones. During the training, a name would appear on the screen while the subject heard examples of the corresponding pitch class. The training for the second treatment arm involved six pitch classes and six proper names that were not used in the first arm of the study. Thus, for the two treatment arms in total, we used all 12 pitch classes of the Western musical system, and presented them in three consecutive octaves, based on Deutsch et al.'s study (2006).

The training videos included three blocks. Each block provided an opportunity for the participants to associate the six names with the corresponding pitch classes. During the first block, a subject saw one of the six names on the screen, and heard in succession the three examples of the pitch class corresponding to that name. The examples, which were piano tones of the same pitch class in three consecutive octaves, occurred one after another from the lowest octave to the highest. The second block was identical to the first, except that the ordering for the octaves was scrambled, such that the piano tones corresponding to a name did not necessarily occur from lowest to highest. In the third block, the stimuli occurred in a semi-random order, one at a time, and paired with the corresponding name. The semi-randomization was unique, and followed the same rules as in the task (see below). The length of the training was based on the duration of the VPA regimen employed in previous studies using VPA with healthy participants (Bell et al., 2005a,b). Specifically, participants trained for 7 days (from the 8th to the 14th day of the 14-day-long regimen), i.e., starting once the full dose of VPA was reached.
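The structure of the three training blocks can be sketched in code. This illustrative Python assumes a particular name-to-pitch-class mapping (taken from the examples in the text) and an arbitrary octave numbering, and it simplifies the third block's constrained semi-randomization to a plain shuffle:

```python
import random

# Example name-to-pitch-class pairing, following the examples in the paper.
NAMES = ["Sarah", "David", "Francine", "Jimmy", "Karen", "Leo"]
PITCH_CLASSES = ["C", "D", "E", "F#/Gb", "G#/Ab", "A#/Bb"]
OCTAVES = [3, 4, 5]  # three consecutive octaves (assumed numbering)

def build_training_blocks(seed=0):
    """Return the three training blocks as lists of (name, pitch_class, octave)."""
    rng = random.Random(seed)
    pairs = list(zip(NAMES, PITCH_CLASSES))

    # Block 1: each name with its pitch class in ascending octave order.
    block1 = [(name, pc, o) for name, pc in pairs for o in OCTAVES]

    # Block 2: the same pairs, but the octave order is scrambled per name.
    block2 = []
    for name, pc in pairs:
        octs = OCTAVES[:]
        rng.shuffle(octs)
        block2.extend((name, pc, o) for o in octs)

    # Block 3: all 18 tones one at a time; the study used a constrained
    # semi-random order, which a plain shuffle only approximates here.
    block3 = block1[:]
    rng.shuffle(block3)
    return block1, block2, block3
```

Each block thus presents all 18 tone-name pairings (6 pitch classes x 3 octaves), differing only in presentation order.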


During the test, the six proper names appeared in a horizontal row on the screen in the same order for every subject (Figure 1A). Participants heard one synthesized piano tone per trial, for a total of 18 trials, in a semi-randomized order, the only constraint being that successive tones were separated by an interval greater than one octave. The tones were 500 ms in duration, interleaved with 3750 ms of silence, and identical to the stimuli used during training. During the 3750 ms period following each tone, participants had to identify the proper name associated with the pitch class of the tone by pressing the keyboard key corresponding to the first letter of the associated name (Figure 1A). The maximum score for the task was 18, the minimum 0, and chance performance corresponded to a score of 3.
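The ordering constraint described above (successive tones more than an octave apart) can be sketched with a small backtracking shuffle. This is an illustrative assumption, not the study's actual implementation; the semitone encoding, function name, and seed are all hypothetical.

```python
import random

# Sketch of the trial-ordering constraint: 6 pitch classes x 3 octaves
# = 18 tones, ordered so that successive tones are always more than an
# octave (12 semitones) apart. The semitone encoding is an assumption.

def make_trial_order(pitch_classes, octaves=3, seed=None):
    rng = random.Random(seed)
    tones = [pc + 12 * o for pc in pitch_classes for o in range(octaves)]

    def extend(seq, remaining):
        if not remaining:
            return seq
        candidates = [t for t in remaining
                      if not seq or abs(t - seq[-1]) > 12]
        rng.shuffle(candidates)
        for t in candidates:
            rest = list(remaining)
            rest.remove(t)
            result = extend(seq + [t], rest)
            if result is not None:
                return result
        return None  # dead end: backtrack

    return extend([], tones)

order = make_trial_order([0, 2, 4, 6, 8, 10], seed=1)
```

Backtracking is used rather than plain reshuffle-and-retry because, with 18 tones and 17 adjacency constraints, a uniformly random shuffle almost never satisfies all of them.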

Figure 1. (A) The setup of the AP task. The purple characters and squares indicate names and corresponding keys used in one treatment arm, the blue ones indicate those used in the other treatment arm. Colors appear here for illustration purposes only. Only black characters were used in the actual experiment and response keys were not highlighted. (B) The cross-over design of the study with the two treatment arms, VPA regimen, training and test times.
Our stimuli were synthesized piano tones as in Deutsch et al.'s (2006) study. Unlike pure tones, stimuli generated by (synthesized) musical instruments contain cues to pitch class identity other than frequency, e.g., timbre. Intermediate and poor AP possessors are known to perform considerably worse with pure tones than with instrument-generated sounds (Lockhead and Byrd, 1981; Bermudez and Zatorre, 2009). Since we only expected a moderate improvement in AP perception in our study, we decided to avoid using pure tones.

This procedure was similar to that used by Deutsch and colleagues (Deutsch et al., 2006). However, it differed in two primary ways: (1) We divided the total set of test tones (36 in total) in half. Doing this enabled us to use an equal number of notes in the two task versions for participants who completed both treatment arms. As a result, we had half as many trials (18 in total) as Deutsch and colleagues. (2) Our participants learned associations between common names and the test tones, whereas Deutsch and colleagues used musical note names.

The names were common first names, the first letters of which appeared on the same row of the keyboard (e.g., s, d, f, j, k, l). Half of the names were female, and the other half male. For one treatment arm, the names were Sarah, David, Francine, Jimmy, Karen, and Leo. For the other, Eric, Rachel, Tyler, Irene, Owen, and Peggy. All of the names were bisyllabic and started with a different letter than the actual name of the musical note with which they were associated. The pitch classes in each treatment arm formed a whole-tone scale by selecting every other pitch from the 12-tone keyboard. For one treatment arm, the pitch classes were thus A#/Bb, C, D, E, F#/Gb, G#/Ab, for the other A, B, C#/Db, D#/Eb, F, G.
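Taking every other pitch class splits the 12-tone system into two complementary whole-tone sets, one per treatment arm, as a minimal illustration shows (the variable names are assumptions):

```python
# The 12 pitch classes of the Western system; taking every other one
# yields the two complementary whole-tone sets used in the two
# treatment arms. Variable names are assumptions for illustration.
PITCH_CLASSES = ["C", "C#/Db", "D", "D#/Eb", "E", "F",
                 "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B"]

arm_one = PITCH_CLASSES[0::2]  # C, D, E, F#/Gb, G#/Ab, A#/Bb
arm_two = PITCH_CLASSES[1::2]  # C#/Db, D#/Eb, F, G, A, B
```

Each set contains six pitches a whole tone apart, and the two sets are disjoint, so no pitch class from the first arm recurs in the second.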

At the beginning of the task, there were four practice trials allowing participants to get used to the name-response key associations. During each practice trial, participants heard one of the common names spoken over the headphones, and then had to press the first letter of the name on the keyboard. We synthesized the names using an American English female voice in MBROLA (Dutoit, 1997). Participants only heard these synthesized names during the practice trials.

The AP task was administered in the post-treatment assessment only, because the participants had to learn novel associations between the proper names and the musical notes during training.


We used a randomized, double-blind, placebo-controlled design with a crossover between the two treatment arms (Figure 1B). We randomly assigned participants to the two treatment groups (VPA and placebo) in counterbalanced blocks of four, using random allocation software to produce the randomization (Saghaei, 2004). The participants, experimenters, and raters were blind to treatment conditions. The eighteen participants who completed a second treatment arm crossed over into the treatment condition they had not undergone during their first treatment regimen. That is, a subject who received placebo first took VPA for the second treatment arm, and vice versa.
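Block randomization in counterbalanced blocks of four can be sketched as follows. The actual study used dedicated allocation software (Saghaei, 2004), so this stdlib version is only an assumed approximation of the procedure.

```python
import random

# Sketch of block randomization in counterbalanced blocks of four:
# each block contains exactly two VPA and two placebo assignments,
# shuffled within the block. An assumption, not the study's software.

def block_randomize(n_participants, seed=None):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["VPA", "VPA", "placebo", "placebo"]
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

groups = block_randomize(24, seed=42)
```

The block structure guarantees the two conditions stay balanced after every fourth participant, which simple coin-flip randomization does not.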

Each treatment arm comprised two assessments, one pre-treatment, and one post-treatment, separated by a two-week treatment period. The pre-treatment assessment consisted of blood samples and tests of mood and cognitive abilities^1. No AP assessment could be administered at this point, as participants, not having considerable musical training, did not necessarily know the names of musical notes. On day 15, participants returned for a post-treatment session that was equivalent to the pre-treatment session, with the addition of a test for AP. There was a washout period of at least 2 weeks and no more than 4 weeks between the first and second treatment arms. The participants began taking capsules on the morning after the pre-treatment assessment. For participants in the VPA condition, the regimen included taking 500 mg (two 250 mg capsules, one in the morning and one at night) for 3 days (days 1–3), followed by 1000 mg (four 250 mg capsules, one in the morning, one in the afternoon, and two at night) for 11 days (days 4–14), and taking 250 mg (one capsule) on the morning of the post-treatment assessment (day 15).
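The dosing schedule above can be summarized in a small lookup. The capsule counts and milligram doses come from the text; the function and its (morning, afternoon, night) representation are assumptions for illustration.

```python
# The 15-day VPA regimen from the text, in 250 mg capsules.
# The (morning, afternoon, night) tuple layout is an assumption.

def capsules_for_day(day):
    """Number of 250 mg capsules taken (morning, afternoon, night)."""
    if 1 <= day <= 3:
        return (1, 0, 1)   # 500 mg/day while ramping up
    if 4 <= day <= 14:
        return (1, 1, 2)   # 1000 mg/day at full dose
    if day == 15:
        return (1, 0, 0)   # 250 mg on the post-treatment morning
    raise ValueError("day outside the 15-day regimen")

# Total VPA over the regimen: 3 x 500 + 11 x 1000 + 250 = 12750 mg
total_mg = sum(250 * sum(capsules_for_day(d)) for d in range(1, 16))
```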

The design of the treatment regimen was based upon previous studies (Bell et al., 2005a,b), and complied with Health Canada guidelines for administering VPA. The placebo regimen was identical to the VPA regimen and the placebo capsules were matched with the VPA capsules in terms of weight and appearance.

On days 8–14 of each treatment, we instructed participants to undergo an on-line training program for approximately 10 min per day. During each online training session, they observed a video, which trained associations between piano tones and proper names. After each video, the website prompted participants to answer a question about the content of the training video in order to gauge quality of attention (e.g., “How many women played notes in the video?,” “How many notes did each person play?” etc.). The website also recorded information about the start time and duration of the training.

Participants who completed both treatment arms filled out a form in which we asked them to guess during which treatment arm we had administered the VPA, and to report any symptoms they experienced while taking the capsules. Every laboratory session included a blood sample collection and assessments of mood and cognitive ability. We assessed mood using the Visual Analog Scales (VAS) for mood, the Beck Depression Inventory (BDI-II), and the Altman Self-Rating Scale for Mania (ASRM).

The cognitive assessment included the Ruff Neurobehavioral Inventory (RNBI-24), Rey Auditory Verbal Learning Test (RAVLT), Stroop Task, and Digit Span Test. We also assessed depth perception with the RANDOT Pre-school Stereoacuity Test, and visual acuity with the Regan Acuity Test. We used different word lists of the RAVLT for the first and second treatment arms. In addition to using them as experimental variables, we also monitored the results of the BDI-II and the RNBI-24 to identify any adverse effects of taking VPA on mood and cognition (e.g., suicidal thoughts or confusion). We did not find any adverse effects in any of the participants.

To assess AP, we used a computerized task programmed in PsyScope X Build 55. Participants worked alone in a room, and followed instructions on the computer screen about how to respond during the task by pressing keys on the computer keyboard. An experimenter introduced the task, and returned to the testing room after the completion of the task to record any comments. We used different stimulus sets for the two treatment arms in order to minimize learning and carry-over effects.


First Treatment Arm

In the first treatment arm, the average number of correct responses was 5.09 in the VPA group and 3.50 in the placebo group (Figure 2). Participants^2 in the VPA condition performed significantly above chance, which was 18/6 = 3 [t(10) = 4.08, p = 0.002], whereas those in the placebo condition performed at chance level [t(11) = 1.32, p = 0.21]. To further probe performance, we plotted the data with errors shown as deviations from the correct pitch category measured in whole tones (Figure 3). The distribution for the placebo group is flatter than that for the VPA group, indicating greater randomness. Errors in the VPA condition also appear to have been random, even though participants in that group were correct more often. This suggests that learning was absolute in the sense that the structure of the categories participants formed did not represent nearness from one category to another, which strengthens the main finding.
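The one-sample t-tests against chance (score = 3) reported above follow the standard formula t = (x̄ − μ)/(s/√n). Here is a stdlib-only sketch; the scores below are made up for illustration and are not the study's data.

```python
import math
from statistics import mean, stdev

# Stdlib-only sketch of a one-sample t-test against chance (score = 3).
# The scores are hypothetical, not the study's actual data.

def t_vs_chance(scores, chance=3.0):
    n = len(scores)
    t = (mean(scores) - chance) / (stdev(scores) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

scores = [5, 4, 7, 3, 6, 5, 4, 6, 5, 7, 4]  # hypothetical n = 11 group
t, df = t_vs_chance(scores)
```

In practice one would pass the resulting t and df to a t-distribution (e.g., SciPy's `scipy.stats.t.sf`) to obtain the p-value; only the statistic itself is computed here.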

Figure 2. Average number of correct responses in the AP task in the first (left) and second (right) treatment arm. Error bars indicate the standard error of the mean. The dashed red line indicates chance performance.


Figure 3. The AP data in the first treatment arm, with errors shown as deviations from the correct pitch category measured in whole tones. Pitch categories repeat every octave. Thus, +3 whole tones is the same as −3 whole tones, and the corresponding bars in the figure represent the same responses. Correct responses correspond to a deviation of 0, so successful learning is expected to yield a distribution with its mode at 0.
In a One-Way ANOVA with Condition (VPA/placebo) as a between-subject factor, we obtained a significant effect of Condition [F(1, 21) = 6.37, p = 0.02] due to better performance in the VPA group compared to the placebo group.

To test whether this positive effect of VPA was specific to AP perception, or whether it resulted from a general change in mood and/or cognition, we conducted ANOVAs with Time (pre-treatment/post-treatment) as a within-subject and Treatment (VPA/placebo) as a between-subject factor on the measures of mood and/or cognition (with the exception of the RANDOT Stereoacuity test, which yields ordinal data and was thus entered into the non-parametric Friedman test). A significant Time X Treatment interaction would indicate a possible differential effect of VPA compared to placebo. No such interaction was obtained for any of the measures. Table 2 summarizes the scores of the measures for which a significant main effect was obtained.

Table 2. Analysis of mood and cognitive measures in the first treatment arm.
For the participants in the VPA condition, the average blood concentration of VPA at the post-treatment assessment was 567 μmol/L (range: 261–854, SD = 165.53). The active range of VPA is considered to be 350–700 μmol/L. The concentration for one subject fell below this range (261 μmol/L), and the concentrations for three other participants fell above this range (708, 732, 854 μmol/L). The concentrations for the remaining eight participants fell within the typical active range. Participants' performance on the AP task did not show a significant correlation with VPA levels in the blood (r = 0.36, n = 11, p = 0.28).

We also calculated the number of training sessions each subject completed. We counted a training session as complete if the subject both watched the full length of the video (up to within 15 s of the end) and answered the subsequent test question correctly. Based on these criteria, participants completed an average of 4.63 AP training sessions (SD = 2.06, range: 0–7). Notably, 0 training sessions did not signify that a subject did no training; all participants did train for this task online. Participants who completed 0 training sessions either systematically stopped the video partway through, or watched the entire length and then failed to answer the question at the end correctly. There was no significant correlation between the number of completed training sessions and performance (r = 0.13, n = 23, p = 0.55).
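The session-completion rule above (video watched to within 15 s of its end and the attention question answered correctly) can be sketched as a simple filter; the log format and values here are hypothetical.

```python
# Sketch of the completion rule: a session counts only if the video was
# watched to within 15 s of its end AND the attention question was
# answered correctly. The log format and values are hypothetical.

def completed_sessions(log, video_length_s):
    return sum(1 for watched_s, answered_correctly in log
               if watched_s >= video_length_s - 15 and answered_correctly)

# Four sessions: only the first and last satisfy both criteria.
log = [(600, True), (600, False), (580, True), (590, True)]
n_complete = completed_sessions(log, video_length_s=600)  # 2 of 4 count
```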

We also ran a correlation analysis to compare AP performance in this treatment arm with the number of years of musical training each subject had completed (r = −0.12, n = 23, p = 0.60), and the age of start of musical training for those participants who did have musical training (r = −0.20, n = 14, p = 0.50), but neither reached significance. Importantly, our participants were musically naïve, had little musical training, and all started music after age 7 with a mean age of 12, i.e., after the critical period. Thus, the absence of any correlation between AP performance in our study and participants' musical training is not unexpected.

Second Treatment Arm

In the second treatment arm, the average number of correct responses was 2.75 in the VPA group and 3.33 in the placebo group (Figure 2). Participants performed at chance level in both groups [VPA: t(7) = 0.333, ns.; placebo: t(9) = 0.709, ns.]. In a One-Way ANOVA with Condition (VPA/placebo) as a between-subject factor, no difference was found between the two groups [F(1, 16) = 0.452, ns.].


To compare the two treatment arms for the 18 participants who completed the whole study (9 received VPA first and 8 received placebo first; as mentioned above, one participant's data were corrupted by a computer error), we also ran an ANOVA with Condition (VPA/placebo) as a within-subject and Order (VPA first/placebo first) as a between-subject factor (Figure 4). There was a main effect of Order [F(1, 15) = 6.06, p = 0.03] due to significantly higher AP scores overall for those who took VPA first. There was also a significant interaction between the factors Condition and Order [F(1, 15) = 8.85, p = 0.009]. To investigate this interaction, we ran two post hoc ANOVAs, one for each treatment order, with Condition (VPA/placebo) as a within-subject factor. For participants who received VPA first, there was a significant effect of Treatment [F(1, 8) = 20.25, p = 0.002] due to higher scores in the VPA treatment condition compared to placebo. There was no difference between VPA and placebo for the 8 participants who received placebo first [F(1, 7) = 1.13, p = 0.32].

Figure 4. Comparing the effects of VPA and placebo for each treatment order. The red line indicates chance performance.
The ANOVAs over the mood and cognitive measures for the crossover treatment yielded a significant Time X Treatment interaction [F(1, 16) = 4.54, p = 0.049] for the Altman Self-Rating Mania Scale. Both treatment conditions were associated with a trend toward lower scores post-treatment compared to pre-treatment. However, the decrease from pre- to post-treatment was greater in the VPA condition. A post hoc analysis revealed that the change from before to after treatment was significant for VPA (p = 0.03, with a Bonferroni correction for multiple comparisons), but not for placebo (p = 0.76). We obtained no significant Time X Treatment interaction on any other measure of mood or cognition. Table 3 provides a summary of the results. Except for the RANDOT Stereoacuity test, we conducted an ANOVA with Time (pre-treatment / post-treatment) and Treatment (VPA/placebo) as within-subject factors and Order (VPA first/placebo first) as a between-subject factor. For the ordinal data from the RANDOT Stereoacuity test, we ran a Friedman Test.

Table 3. Analysis of mood and cognitive measures for crossover treatments.
The experiment was double-blind, as neither participants nor experimenters knew the randomization for treatment conditions. However, we did ask participants to intuit in which arm they received VPA treatment, and why they thought so. We also instructed them to write down any side effects they experienced during the experiment. Out of the 18 participants who completed the second treatment arm, 17 guessed correctly. A small number of them said they felt they were functioning at a higher cognitive level when taking what they thought was the VPA, but most said that mild side effects like drowsiness and nausea were the primary cues that they used to determine which was the active treatment. It needs to be noted, however, that participants' opinion about the drug taken was unlikely to influence the results for at least two reasons. First, participants could guess, but could not be sure about, the substance taken. Second, they were naïve with respect to the hypothesis tested and could thus not voluntarily influence their behavior in the expected direction.


Until now we had no mechanistic account of the neural processes underlying the critical period of AP. More generally, we have lacked human experimental models with which to measure the potential for a compound to facilitate neuroplasticity in the adult human brain. This study provides a proof of concept for the possibility of restoring neuroplasticity with a drug by offering evidence for a possible effect of VPA on AP perception. In confirmation of our hypothesis, AP performance varied according to treatment condition. Normal male volunteers performed significantly better on a test of AP after 2 weeks of VPA treatment than after 2 weeks of placebo.

Certain aspects of our findings warrant further discussion. First, it was not possible to establish baseline performance on the AP task, as the association between the musical notes and the names was necessarily established during training. One possible option would have been to test participants' AP performance early in the regimen, while they were gradually reaching the full dose of VPA. We decided not to implement such a baseline test, as no precise information was available regarding the time course of the effect of VPA under the current conditions, so AP training only started on day 8, once participants had reached the full dose. The absence of a baseline test might introduce ambiguity into the interpretation of the effect. Indeed, there are identifiable individual differences in musical ability, and in the neuroanatomical structures supporting it (see Herholz and Zatorre, 2012 for a review), which might have led, by chance, to participants with better AP abilities being assigned to the VPA group in the first treatment arm. However, there is evidence in our study that the significant effect of treatment for the AP task cannot be fully accounted for by a between-group difference in the ability to acquire AP. If such a difference were the driving force, then we would have expected that in the second treatment arm the participants in the potentially high-ability group (i.e., those who were in the VPA condition for the first treatment arm) would outperform the low-ability group (i.e., those who were in the placebo condition for the first treatment arm). However, this was not the case. Further, since our participants were musically naïve (participants who received musical training all started after the age of 7 years, with a mean of 12 years), the presence of complete or partial AP possessors in either of the groups is very unlikely. As reported above, there was no significant difference between the two groups in the second treatment arm.
Further, all of the top scorers from the VPA condition in the first treatment arm completed the second treatment arm, eliminating the possibility that the participants who completed the second treatment arm were not representative of the group from the first treatment arm. This strengthens our interpretation that the VPA treatment led to the significantly higher performance in the VPA condition compared to placebo.

Second, the analysis of the crossover, i.e., of the 17 participants for whom we have data from both arms, revealed an order-dependent effect of treatment. For participants who took VPA first, AP performance was significantly higher after VPA treatment than after placebo. In contrast, for participants who initially took placebo, there was no such difference. It may be that carry-over effects impeded performance on the AP task in the second treatment arm. A memory conflict between the pitch classes and proper names used in the first treatment arm could have interfered with those used in the second. Because of this order effect, the most reliable comparison focused on the first treatment arm. Future research should aim to use a randomized controlled trial (RCT) design with treatment condition as a between-participants factor. Doing so would avoid any possible carry-over effects from one treatment arm to the next. Relatedly, it needs to be noted that we did not test how long the effect of the improvement in AP perception lasted. Future research will need to address this question by retesting participants after several days or weeks following the end of training.

Third, our study was not set up to measure reaction times. Participants were informed that they had 4 s available for their response with no instruction to do so rapidly. However, as AP possessors are known to recognize pitch classes faster than non-AP possessors do (Levitin and Rogers, 2005), reaction times constitute a relevant measure to use in future follow-up studies.

Fourth, the sizes of the groups tested were relatively small compared to some recent AP studies, which might explain why no significant correlation was observed between AP performance and training compliance, i.e., the amount of training received. However, when compared to studies using VPA with healthy participants, our sample sizes were quite appropriate (e.g., n = 12 in both the placebo and the VPA groups in the Bell et al., 2005a,b studies).

Fifth, we observed an effect of VPA on the Altman Self-Rating Mania Scale. Indeed, VPA is commonly used as a mood stabilizer to control mania in bipolar disorder (Macritchie et al., 2001). Our results in healthy participants therefore confirmed the clinical action of VPA, which may have an effect on sub-clinical levels of mania as well. Importantly, however, an improved or more stable mood in the VPA condition cannot explain the obtained results, as we would then expect general cognitive improvement in the other neurocognitive tasks administered, as well as greater AP performance in both treatment arms.

In sum, our study is the first to show a change in AP with any kind of drug treatment. The finding that VPA can restore plasticity in a fundamental perceptual system in adulthood provides compelling evidence that one of the modes of action for VPA in psychiatric treatment may be to facilitate reorganization and rewiring of otherwise firmly established pathways in the brain and its epigenome (Shen et al., 2008).

Valproic acid is believed to have multiple pharmacological actions, including acute blockade of GABA transaminase to enhance inhibitory function in epileptic seizures and enduring effects on gene transcription as a histone deacetylase (HDAC) inhibitor (Monti et al., 2009). Of relevance here are the epigenetic actions of this drug, as enhancing inhibition does not reactivate brain plasticity in adulthood (Fagiolini and Hensch, 2000), but reopening chromatin structure does (Putignano et al., 2007). While systemic drug application is a rather coarse treatment, the effects may differ dramatically by individual cell type (TK Hensch and P Carninci, unpublished observations). VPA treatment mimics Nogo receptor deletion to reopen plasticity for acoustic preference in mice (Yang et al., 2012), suggesting a common pathway through the regulation of myelin-related signaling which normally closes critical period plasticity (McGee et al., 2005). Future work will address the cellular actions of VPA treatment in the process of reactivating critical periods. Future MRI studies will also be needed to establish whether HDAC inhibition by VPA induces hyperconnectivity of myelinated, long-range connections concurrent with renewed AP ability (Loui et al., 2011).

If confirmed by future replications, our study will provide a behavioral paradigm for the assessment of the potential of psychiatric drugs to induce plasticity. In particular, the AP task may be useful as a behavioral correlate. If further studies continue to reveal specificity of VPA to the AP task (or to tasks on which training or intervention is provided), critical information will have been garnered concerning when systemic drug treatments may safely be used to reopen neural plasticity in a specific, targeted way.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


We thank Tatiana Ramirez-Aponte, Holly MacPherson, Andrea Blair, Brock Ferguson, Pere Mila, Pamela Lau, Julia Leibowich, Marissa Mar, and Trisha Pranjivan for their help with data collection. Funding was provided by the following grant agencies: Human Frontiers Science Program (HFSP) (RGP0018/2007-C to Takao K. Hensch and Janet F. Werker), Michael Smith Foundation for Health Research (to Bradley W. Vines), Coast Capital Savings Depression Research Fund (to Bradley W. Vines and Allan H. Young), and a Quinn Fellowship (to Lawrence M. Chen).


^Along with the adult test of AP reported here, participants were also tested on pilot versions of infant perceptual tasks, which we are developing for use with adults.
^A software malfunction corrupted the data for one participant in the first treatment arm, leaving 11 who took VPA and 12 who took placebo.

References available at the Frontiers site.

This Is Your Brain on Religion: Uncovering the Science of Belief (Salon)

The following article from Salon kind of goes with the one below, posted earlier this morning on entheogens and religion. This piece is excerpted from We Are Our Brains: A Neurobiography of the Brain, From the Womb to Alzheimer’s, by D. F. Swaab.

In discussing how religion is experienced in the brain, Swaab cites an old study involving Carmelite nuns:
Carmelite nuns were asked to remember their most mystical Christian experience while undergoing functional scans. The scans showed a complex activation pattern of brain areas. Activation occurred in (1) the center of the temporal lobe, possibly relating to the feeling of being one with God (this region is also activated in temporal lobe epilepsy, sometimes causing intense religious experiences); (2) the caudate nucleus (an area in which emotions are processed), possibly relating to the feeling of joy and unconditional love; and (3) the brain stem, insular cortex, and prefrontal cortex, possibly relating to the bodily and autonomic reactions that go with these emotions and cortical consciousness of them. Finally, the parietal cortex was also activated, possibly relating to the feeling of changes in the body map similar to those in near-death experiences.
There are also similar studies on the brains of Buddhist meditators:
Functional scans of Japanese monks show that different types of meditation stimulate different areas of the brain, namely parts of the prefrontal cortex and the parietal cortex.
Interesting article from what might be an interesting new book.

This is your brain on religion: Uncovering the science of belief

From Pope Francis to Phil Robertson: Why are some people of faith generous — while others are nuts?

D.F. Swaab | Saturday, Jan 4, 2014


Excerpted from We Are Our Brains.

As far as I’m concerned, the most interesting question about religion isn’t whether God exists but why so many people are religious. There are around 10,000 different religions, each of which is convinced that there’s only one Truth and that they alone possess it. Hating people with a different faith seems to be part of belief. Around the year 1500, the church reformer Martin Luther described Jews as a “brood of vipers.” Over the centuries the Christian hatred of the Jews led to pogroms and ultimately made the Holocaust possible. In 1947, over a million people were slaughtered when British India was partitioned into India for the Hindus and Pakistan for the Muslims. Nor has interfaith hatred diminished since then. Since the year 2000, 43 percent of civil wars have been of a religious nature.

Almost 64 percent of the world’s population is Catholic, Protestant, Muslim, or Hindu. And faith is extremely tenacious. For many years, Communism was the only permitted belief in China and religion was banned, being regarded, in the tradition of Karl Marx, as the opium of the masses. But in 2007, one-third of Chinese people over the age of 16 said that they were religious. Since that figure comes from a state-controlled newspaper, the China Daily, the true number of believers is likely at least that high. Around 95 percent of Americans say that they believe in God, 90 percent pray, 82 percent believe that God can perform miracles, and over 70 percent believe in life after death. It’s striking that only 50 percent believe in hell, which shows a certain lack of consistency. In the Netherlands, a much more secular country, the percentages are lower. A study carried out in April 2007 showed that in the space of 40 years, secularization had increased from 33 to 61 percent. Over half of the Dutch people doubt the existence of a higher power and are either agnostic or believe in an unspecified “something.” Only 14 percent are atheists, the same percentage as Protestants. There are slightly more Catholics (16 percent).

In 2006, during a symposium in Istanbul, Herman van Praag, a professor of biological psychiatry, taking his lead from the 95 percent of believers in the United States, tried to convince me that atheism was an “anomaly.” “That depends on who you compare yourself to,” I replied. In 1996 a poll of American scientists revealed that only 39 percent were believers, a much smaller percentage than the national average. Only 7 percent of the country’s top scientists (defined for this poll as the members of the National Academy of Sciences) professed a belief in God, while almost no Nobel laureates are religious. A mere 3 percent of the eminent scientists who are members of Britain’s Royal Society are religious. Moreover, meta-analysis has shown a correlation among atheism, education, and IQ. So there are striking differences within populations, and it’s clear that degree of atheism is linked to intelligence, education, academic achievement, and a positive interest in natural science. Scientists also differ per discipline: Biologists are less prone to believe in God and the hereafter than physicists. So it isn’t surprising that the vast majority (78 percent) of eminent evolutionary biologists polled called themselves materialists (meaning that they believe physical matter to be the only reality). Almost three quarters (72 percent) of them regarded religion as a social phenomenon that had evolved along with Homo sapiens. They saw it as part of evolution, rather than conflicting with it.

It does indeed seem that religion must have afforded an evolutionary advantage. Receptiveness to religion is determined by spirituality, which is 50 percent genetically determined, as twin studies have shown. Spirituality is a characteristic that everyone has to a degree, even if they don’t belong to a church. Religion is the local shape given to our spiritual feelings. The decision to be religious or not certainly isn’t “free.” The surroundings in which we grow up cause the parental religion to be imprinted in our brain circuitries during early development, in a similar way to our native language. Chemical messengers like serotonin affect the extent to which we are spiritual: The number of serotonin receptors in the brain corresponds to scores for spirituality. And substances that affect serotonin, like LSD, mescaline (from the peyote cactus), and psilocybin (from magic mushrooms) can generate mystical and spiritual experiences. Spiritual experiences can also be induced with substances that affect the brain’s opiate system.

Dean Hamer believes that he has identified the gene that predisposes our level of spirituality, as he describes in “The God Gene” (2004). But since it will probably prove to be simply one of the many genes involved, he’d have done better to call his book “A God Gene.” The gene in question codes for VMAT2 (vesicular monoamine transporter 2), a protein that wraps chemical messengers (monoamines) in vesicles for transport through the nerve fibers and is crucial to many brain functions.

The religious programming of a child’s brain starts after birth. The British evolutionary biologist Richard Dawkins is rightly incensed when reference is made to “Christian, Muslim, or Jewish children,” because young children don’t have any kind of faith of their own; faith is imprinted in them at a very impressionable stage by their Christian, Muslim, or Jewish parents. Dawkins rightly points out that society wouldn’t tolerate the notion of atheist, humanist, or agnostic four-year-olds and that you shouldn’t teach children what to think but how to think. Dawkins sees programmed belief as a byproduct of evolution. Children accept warnings and instructions issued by their parents and other authorities instantly and without argument, which protects them from danger. As a result, young children are credulous and therefore easy to indoctrinate. This might explain the universal tendency to retain the parental faith. Copying, the foundation of social learning, is an extremely efficient mechanism. We even have a separate system of mirror neurons for it. In this way, religious ideas like the belief that there’s life after death, that if you die as a martyr you go to paradise and are given 72 virgins as a reward, that unbelievers should be persecuted, and that nothing is more important than belief in God are also passed on from generation to generation and imprinted in our brain circuitry. We all know from those around us how hard it is to shed ideas that have been instilled in early development.

The Evolutionary Advantage of Religion

“Religion is excellent stuff for keeping common people quiet.” — Napoleon Bonaparte
The evolution of modern man has given rise to five behavioral characteristics common to all cultures: language, toolmaking, music, art, and religion. Precursors of all of these, with the exception of religion, can be found in the animal kingdom. Yet religion, too, must have conferred a clear evolutionary advantage on humankind.

(1) First, religion binds groups. Jews have been kept together as a group by their faith, in spite of the Diaspora, the Inquisition, and the Holocaust. For leaders, belief is an excellent instrument. As Seneca said, “Religion is regarded by the common people as true, by the wise as false, and by rulers as useful.” Religions use various mechanisms to keep the group together:

One is the message that it’s sinful to marry an unbeliever (that is, someone with a different belief). As an old Dutch proverb states, “When two faiths share a pillow, the devil sleeps in the middle.” This principle is common to all religions, with attendant punishments and warnings. Segregating education according to faith makes it easier to reject others, because ignorance breeds contempt.

Another is the imposition of numerous social rules on the individual in the name of God, sometimes accompanied by dire threats about the fate of those who don’t keep them. One of the Ten Commandments, for instance, is lent force by the threat of a curse “unto the fourth generation.” Blasphemy is severely punished in the Old Testament and is still a capital offense in Pakistan. Threats have also helped to make churches rich and powerful. In the Middle Ages, enormous sums were paid in return for “indulgences,” shortening the time that someone would spend in purgatory. As Johann Tetzel, a preacher known for selling indulgences, is alleged to have put it, “As soon as a coin in the coffer rings, a soul from purgatory springs.” At the beginning of the 20th century, Catholic clerics were still automatically awarded indulgences based on the rank they held in the church. Threats and intimidation are effective even in this day and age. In Colorado, a pastor has introduced the idea of “Hell Houses,” to which fundamentalist Christian schools send children to frighten them with the punishments that await them in the afterlife if they stray from the straight and narrow.

A further binding mechanism is being recognizable as a member of the group. This can take the form of distinguishing signs, like black clothing, a yarmulke, a cross, a headscarf, or a burka; or physical characteristics, like the circumcision of boys or girls; or knowledge of the holy scriptures, prayers, and rituals. You must be able to see who belongs to the group in order to obtain protection from fellow members. This mechanism is so strong that it seems senseless to try to ban people from wearing distinguishing accessories or items of clothing like headscarves. Social contacts within the group also bring with them considerable advantages and play an important role in American churches. The feeling of group kinship has been strengthened over the centuries by holy relics worshiped by the various faiths. It doesn’t matter that there are wagonloads of Buddha’s ashes in temples in China and Japan, nor that so many splinters of the True Cross have been preserved that, according to Erasmus, you could build a fleet of ships from them. The point is that such things keep the group together. The same applies to the 20 or so churches that claim to have Christ’s original foreskin in their possession. (According to Jewish tradition, he was circumcised at the age of eight days.) Some theologians have argued that Christ’s foreskin was restored on his ascension to heaven. However, according to the 17th-century theologian Leo Allatius, the Holy Prepuce ascended to heaven separately, forming the ring around Saturn.

Finally, most religions have rules that promote reproduction. This can entail a ban on contraception. The faith is spread by having children and then indoctrinating them, making the group bigger and therefore stronger.

(2) Traditionally, the commandments and prohibitions imposed by religions had a number of advantages. Besides the protection offered by the group, the social contacts and prescriptions (like kosher food) had some beneficial effects on health. Even today, various studies suggest that religious belief is associated with better mental health, as indicated by satisfaction with life, better mood, greater happiness, less depression, fewer suicidal inclinations, and less addiction. However, the causality of these correlations hasn’t been demonstrated, and the links aren’t conclusive. Moreover, the reduced incidence of depression applies only to women. Men who are regular churchgoers are in fact more likely to become depressed. An Israeli study showed that, in complete opposition to the researchers’ hypothesis, a religious lifestyle was associated with a doubled risk of dementia 35 years later. Moreover, there are studies showing that praying is positively correlated with psychiatric problems.

(3) Having a religious faith is a source of comfort and help at difficult times, whereas atheists have to solve their difficulties without divine aid. Believers can also console themselves that God must have had a purpose in afflicting them. In other words, they see their problems as a test or punishment, that is, as having some meaning. “Because people have a sense of purpose, they assume that God, too, acts according to purpose,” Spinoza said. He concluded that belief in a personal god came about because humans assumed that everything around them had been created for their use by a being who ruled over nature. So they viewed all calamities, like earthquakes, accidents, volcanic eruptions, epidemics, and floods, as a punishment by that same being. According to Spinoza, religion emerged as a desperate attempt to ward off God’s wrath.

(4) God has the answer to everything that we don’t know or understand, and belief makes you optimistic (“Yes, I’m singin’ a happy song/With a Friend like Jesus I’ll stand strong”). Faith also gives you the assurance that even if times are hard now, things will be much better in the next life. Curiously, adherents of religion always claim that it adds “meaning” to their life, as if it were impossible to lead a meaningful life without divine intervention.

(5) Another advantage of religion, it would seem, is that it takes away the fear of death — all religions promise life after death. The belief in an afterlife goes back 100,000 years. We know this from all the items found in graves: food, water, tools, hunting weapons, and toys. Cro-Magnon people also buried their dead with large amounts of jewelry, as is still done in Asia today. You need to look good in the next life, too. Yet being religious doesn’t invariably make people less afraid of dying. The moderately religious fear death more than fervent believers and those who are only very slightly religious, which is understandable when you see how often religion uses fear as a binding agent. Yet many appear to feel a little uncertain about the promised life after death. Richard Dawkins rightly wondered, “If they were truly sincere, shouldn’t they all behave like the Abbot of Ampleforth? When Cardinal Basil Hume told him that he was dying, the abbot was delighted for him: ‘Congratulations! That’s brilliant news. I wish I was coming with you.’ ”

(6) A very important element of religion has always been that it sanctions killing other groups in the name of one’s own god. The evolutionary advantage of combining aggression, a group distinguishable by its belief, and discrimination against outsiders is clear. Over millions of years, humans developed in an environment where there was just enough food for one’s own group. Any other group encountered in the savanna posed a mortal threat and had to be destroyed. These evolutionary traits of aggression and tribalism can’t be wiped out by a few generations of centrally heated life. That explains why xenophobia is still so widespread in our society. The whole world is full of conflicts between groups with different faiths. Since time immemorial the “peace of God” has been imposed on others by fire and sword. That’s unlikely to change soon.

Though it came at a price, belonging to a group brought many advantages. The protection it offered against other groups improved survival chances. But the harm caused by religions — largely to outsiders, but also to members of the group — is enormous. It seems as if this situation won’t persist indefinitely, though. A study by the British politician Evan Luard showed that the nature of wars has been changing since the Middle Ages and that they are gradually becoming shorter and fewer in number. So we may perhaps be cautiously optimistic. Since the evolutionary advantage of religion as a binding agent and of aggression as a means of eliminating outsiders will disappear in a globalized economy and information society, both traits will become less important over hundreds of thousands of years. In this way, freed from the straitjacket of outmoded religious rules, true freedom and humanity will be possible for all, no matter what their belief — or lack of it.

The Religious Brain

“Emotional excitement reaches men through tea, tobacco, opium, whisky, and religion.” — George Bernard Shaw (1856–1950)
Spiritual experiences cause changes in brain activity, which is logical and neither proves nor disproves the existence of God. After all, everything we do, think, and experience provokes such changes. Findings of this kind merely increase our understanding of the various brain structures and systems that play a role in both “normal” religious experiences and the type of religious experience that is a symptom of certain neurological or psychiatric disorders.

Functional scans of Japanese monks show that different types of meditation stimulate different areas of the brain, namely parts of the prefrontal cortex and the parietal cortex. Religious belief is also associated with reduced reactivity of the anterior cingulate cortex (ACC), as is political conservatism. Although the causality of these correlations isn’t clear, it’s interesting that taking initiatives, by contrast, is associated with increased activity in the ACC. The EEGs of Carmelite nuns have shown marked changes during mystical experiences when they felt they were at one with God. In a state like this, individuals may also feel as if they have found the ultimate truth, lost all sense of time and space, are in harmony with mankind and the universe, and are filled with peace, joy, and unconditional love. Neuropharmacological studies show how crucial the activation of the dopamine reward system is in such experiences. In this context, brain disorders are also instructive. Alzheimer’s disease, for instance, is linked to the progressive loss of religious interest. The more slowly it progresses, the less religiousness and spirituality are affected. Conversely, hyperreligiosity is associated with fronto-temporal dementia, mania, obsessive-compulsive behavior, schizophrenia, and temporal lobe epilepsy. A number of these disorders are known to make the dopamine reward system more active.

Carmelite nuns were asked to remember their most mystical Christian experience while undergoing functional scans. The scans showed a complex activation pattern of brain areas. Activation occurred in (1) the center of the temporal lobe, possibly relating to the feeling of being one with God (this region is also activated in temporal lobe epilepsy, sometimes causing intense religious experiences); (2) the caudate nucleus (an area in which emotions are processed), possibly relating to the feeling of joy and unconditional love; and (3) the brain stem, insular cortex, and prefrontal cortex, possibly relating to the bodily and autonomic reactions that go with these emotions and cortical consciousness of them. Finally, the parietal cortex was also activated, possibly relating to the feeling of changes in the body map similar to those in near-death experiences.

It’s sometimes hard to draw a line between spiritual experiences and pathological symptoms. The former can get out of hand, leading to mental illness. Intense religious experiences occasionally spark brief episodes of psychosis. Paul Verspeek, hosting a local Dutch radio show on Boxing Day 2005, asked psychiatrists how they would recognize Jesus Christ if he returned to Earth. How would they distinguish between him and mentally ill patients who claimed to be Christ? The psychiatrists were stumped for an answer. During the 1960s, when meditation and drug use were popular, many people developed psychiatric problems. They were unable to control their spiritual experiences, which derailed their psychological, social, and professional functioning. In some cultures and religions, however, voluntary engagement in meditative practices, trance, depersonalization, and derealization are quite normal and therefore can’t be seen as symptoms of a psychiatric disorder. Phenomena that Western culture classifies as chicanery or nonsense, like magic arts, voodoo, and sorcery, are considered normal in other cultures. Some also regard visual and auditory hallucinations of a religious nature (like seeing the Virgin Mary or hearing God’s voice) as a normal part of religious experiences. That said, a high proportion of patients with psychoses are religious, as their condition often prompts an interest in spirituality. And many use religion as a way of coping with their disorder. So problems with a religious bearing always need to be looked at in the light of what is considered normal in a particular era or cultural setting. Only in this way can “purely” religious and spiritual problems be distinguished from neurological or psychiatric ones.

~ Excerpted from We Are Our Brains: A Neurobiography of the Brain, From the Womb to Alzheimer’s, by D. F. Swaab. Copyright © 2014 by D. F. Swaab. Excerpted by permission of Spiegel & Grau, an imprint of Random House. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. More D.F. Swaab.