
Saturday, July 05, 2014

How Psilocybin Mushrooms Induce a Dream-Like State


Researchers in England have identified a possible mechanism for how psilocybin (the natural, hallucinogenic tryptamine found in various mushroom species) exerts its effects in the brain.

Turns out that the state of consciousness while "tripping" is quite similar to the state of consciousness while dreaming. And now everyone who has ever used hallucinogens is releasing a collective, "Duh!"

Using brain imaging technology, the researchers identified "increased activity in the hippocampus and anterior cingulate cortex, areas involved in emotions and the formation of memories," areas considered to be more primitive in that they evolved earlier. Meanwhile, "decreased activity was seen in 'less primitive' regions of the brain associated with self-control and higher thinking, such as the thalamus, posterior cingulate and medial prefrontal cortex."

How magic mushrooms induce a dream-like state




The chemical psilocybin causes the same brain activation as dreaming does (Image: Ryan Wendler/Corbis)

Anyone who has enjoyed a magical mystery tour thanks to the psychedelic powers of magic mushrooms knows the experience is surreally dreamlike. Now neuroscientists have uncovered a reason why: the active ingredient, psilocybin, induces changes in the brain that are eerily akin to what goes on when we're off in the land of nod.

"For the first time, we have a physical representation of what taking magic mushrooms does to the brain," says Robin Carhart-Harris of Imperial College London, who was part of the team that carried out the research.

Researchers from Imperial and Goethe University in Frankfurt, Germany, injected 15 people with liquid psilocybin while they were lying in an fMRI scanner. The scans show the flow of blood through different regions of the brain, giving a measure of how active the different areas are.

The images taken while the volunteers were under the influence of the drug were compared with those taken when the same people were injected with an inert placebo. This revealed that during the psilocybin trip, there was increased activity in the hippocampus and anterior cingulate cortex, areas involved in emotions and the formation of memories. These are often referred to as primitive areas of the brain because they were some of the first parts to evolve.
 

Primal depths

At the same time, decreased activity was seen in "less primitive" regions of the brain associated with self-control and higher thinking, such as the thalamus, posterior cingulate and medial prefrontal cortex.

This activation pattern is similar to what is seen when someone is dreaming.

"This was neat because it fits the idea that psychedelics increase your emotional range," says Carhart-Harris.

Neuroscientific nuts and bolts aside, the findings could have genuine practical applications, says psychiatrist Adam Winstock at the Maudsley Hospital and Lewisham Drug and Alcohol Service in London. Psilocybin – along with other psychedelics – could be used therapeutically because of its capacity to probe deep into the primal corners of the brain.

"Dreaming appears to be an essential vehicle for unconscious emotional processing and learning," says Winstock. By using psilocybin to enter a dreamlike state, people could deal with stresses of trauma or depression, he says. "It could help suppress all the self-deceiving noise that impedes our ability to change and grow."

Next, the team wants to explore the potential use of magic mushrooms, LSD and other psychedelics to treat depression.

Journal reference: Human Brain Mapping, DOI: 10.1002/hbm.22562

* * * * *

Here are the abstract and introduction from the full article, which is freely available online.

Full Citation:
Tagliazucchi, E., Carhart-Harris, R., Leech, R., Nutt, D. and Chialvo, D. R. (2014, Jul 2). Enhanced repertoire of brain dynamical states during the psychedelic experience. Human Brain Mapping; ePub ahead of print. doi: 10.1002/hbm.22562

Enhanced repertoire of brain dynamical states during the psychedelic experience

Enzo Tagliazucchi, Robin Carhart-Harris, Robert Leech, David Nutt, and Dante R. Chialvo

Article first published online: 2 JUL 2014
Abstract

The study of rapid changes in brain dynamics and functional connectivity (FC) is of increasing interest in neuroimaging. Brain states departing from normal waking consciousness are expected to be accompanied by alterations in the aforementioned dynamics. In particular, the psychedelic experience produced by psilocybin (a substance found in “magic mushrooms”) is characterized by unconstrained cognition and profound alterations in the perception of time, space and selfhood. Considering the spontaneous and subjective manifestation of these effects, we hypothesize that neural correlates of the psychedelic experience can be found in the dynamics and variability of spontaneous brain activity fluctuations and connectivity, measurable with functional Magnetic Resonance Imaging (fMRI). Fifteen healthy subjects were scanned before, during and after intravenous infusion of psilocybin and an inert placebo. Blood-Oxygen Level Dependent (BOLD) temporal variability was assessed computing the variance and total spectral power, resulting in increased signal variability bilaterally in the hippocampi and anterior cingulate cortex. Changes in BOLD signal spectral behavior (including spectral scaling exponents) affected exclusively higher brain systems such as the default mode, executive control, and dorsal attention networks. A novel framework enabled us to track different connectivity states explored by the brain during rest. This approach revealed a wider repertoire of connectivity states post-psilocybin than during control conditions. Together, the present results provide a comprehensive account of the effects of psilocybin on dynamical behavior in the human brain at a macroscopic level and may have implications for our understanding of the unconstrained, hyper-associative quality of consciousness in the psychedelic state.
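To make the abstract's two variability measures concrete: for a single region's BOLD time series, the temporal variance and the total spectral power can each be computed in a few lines. The sketch below is only an illustration of those quantities; the repetition time, the synthetic "placebo" and "psilocybin" signals, and the function name are my own assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

def bold_variability(ts, tr=3.0):
    """Simple variability measures for one region's BOLD time series.

    ts : 1-D array of BOLD samples for a region of interest
    tr : repetition time in seconds (assumed sampling interval of the scan)
    """
    ts = ts - ts.mean()                       # remove the mean before measuring fluctuations
    variance = ts.var()                       # temporal variance of the signal
    freqs, psd = welch(ts, fs=1.0 / tr)       # power spectral density via Welch's method
    total_power = np.sum(psd) * (freqs[1] - freqs[0])   # total spectral power (area under the PSD)
    return variance, total_power

# Illustrative use with synthetic signals standing in for the two conditions:
rng = np.random.default_rng(0)
placebo = rng.normal(scale=1.0, size=300)       # hypothetical 300-volume scan
psilocybin = rng.normal(scale=1.4, size=300)    # same length, larger fluctuations

for label, ts in [("placebo", placebo), ("psilocybin", psilocybin)]:
    var, power = bold_variability(ts)
    print(f"{label}: variance={var:.2f}, total spectral power={power:.2f}")
```

In the study itself these measures were computed per region across the whole brain and compared between conditions; the sketch only shows what "variance" and "total spectral power" mean for one signal.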

INTRODUCTION

Psilocybin (phosphoryl-4-hydroxy-dimethyltryptamine) is the phosphorylated ester of the main psychoactive compound found in magic mushrooms. Pharmacologically related to the prototypical psychedelic LSD, psilocybin has a long history of ceremonial use via mushroom ingestion and, in modern times, psychedelics have been assessed as tools to enhance the psychotherapeutic process [Grob et al., 2011; Krebs et al., 2012; Moreno et al., 2006]. The subjective effects of psychedelics include (but are not limited to) unconstrained, hyperassociative cognition, distorted sensory perception (including synesthesia and visions of dynamic geometric patterns) and alterations in one's sense of self, time and space. There is recent preliminary evidence that psychedelics may be effective in the treatment of anxiety related to dying [Grob et al., 2011] and obsessive compulsive disorder [Moreno et al., 2006] and there are neurobiological reasons to consider their potential as antidepressants [Carhart-Harris et al., 2012, 2013]. Similar to ketamine (another novel candidate antidepressant) psychedelics may also mimic certain psychotic states such as the altered quality of consciousness that is sometimes seen in the onset-phase of a first psychotic episode [Carhart-Harris et al., 2014]. There is also evidence to consider similarities between the psychology and neurobiology of the psychedelic state and Rapid Eye Movement (REM) sleep [Carhart-Harris, 2007; Carhart-Harris and Nutt, 2014], the sleep stage associated with vivid dreaming [Aserinsky and Kleitman, 1953].

The potential therapeutic use of psychedelics, as well as their capacity to modulate the quality of conscious experience in a relatively unique and profound manner, emphasizes the importance of studying these drugs and how they act on the brain to produce their novel effects. One potentially powerful way to approach this problem is to exploit human neuroimaging to measure changes in brain activity during the induction of the psychedelic state. The neural correlates of the psychedelic experience induced by psilocybin have been recently assessed using Arterial Spin Labeling (ASL) and BOLD fMRI [Carhart-Harris et al., 2012]. This work found that psilocybin results in a reduction of both cerebral blood flow (CBF) and BOLD signal in major subcortical and cortical hub structures such as the thalamus, posterior cingulate (PCC) and medial prefrontal cortex (mPFC) and in decreased resting state functional connectivity (RSFC) between the normally highly coupled mPFC and PCC. Furthermore, our most recent study used magnetoencephalography (MEG) to more directly measure altered neural activity post-psilocybin and here we found decreased oscillatory power in the same cortical hub structures [Muthukumaraswamy et al., 2013; see also Carhart-Harris et al., 2014 for a review of this work].

These results establish that psilocybin markedly affects BOLD, CBF, RSFC, and oscillatory electrophysiological measures in strategically important brain structures, presumably involved in information integration and routing [Carhart-Harris et al., 2014; de Pasquale et al., 2012; Hagmann et al., 2008; Leech et al., 2012]. However, the effects of psilocybin on the variance of brain activity parameters across time have been relatively understudied, and this line of enquiry may be particularly informative in terms of shedding new light on the mechanisms by which psychedelics elicit their characteristic psychological effects. Thus, the main objective of this article is to examine how psilocybin modulates the dynamics and temporal variability of resting state BOLD activity. A large body of research has now established that resting state fluctuations in brain activity, once regarded as physiological noise, have enormous neurophysiological and functional relevance [Fox and Raichle, 2007]. Spontaneous fluctuations self-organize into internally coherent spatiotemporal patterns of activity that reflect neural systems engaged during distinct cognitive states (termed “intrinsic” or “resting state networks”—RSNs) [Fox and Raichle, 2005; Raichle, 2011; Smith et al., 2009]. It has been suggested that the variety of spontaneous activity patterns that the brain enters during task-free conditions reflects the naturally itinerant and variegated quality of normal consciousness [Raichle, 2011]. However, spatio-temporal patterns of resting state activity are globally well preserved in states such as sleep [Boly et al., 2008, 2012; Brodbeck et al., 2012; Larson-Prior et al., 2009; Tagliazucchi et al., a,b,c] in which there is a reduced level of awareness—although very specific changes in connectivity occur across NREM sleep, allowing the decoding of the sleep stage from fMRI data [Tagliazucchi et al., 2012c; Tagliazucchi and Laufs, 2014]. Thus, if the subjective quality of consciousness is markedly different in deep sleep relative to the normal wakeful state (for example) yet FC measures remain largely preserved, this would suggest that these measures provide limited information about the biological mechanisms underlying different conscious states. Similarly, intra-RSN FC is decreased under psilocybin [Carhart-Harris et al., 2013] yet subjective reports of unconstrained or even “expanded” consciousness are common among users (see Carhart-Harris et al. [2014] for a discussion). Thus, the present analyses are motivated by the view that more sensitive and specific indices are required to develop our understanding of the neurobiology of conscious states, and that measures which factor in variance over time may be particularly informative.

A key feature of spontaneous brain activity is its dynamical nature. In analogy to other self-organized systems in nature, the brain has been described as a system residing in (or at least near to) a critical point or transition zone between states of order and disorder [Chialvo, 2010; Haimovici et al., 2013; Tagliazucchi and Chialvo, 2011; Tagliazucchi et al., 2012a]. In this critical zone, it is hypothesized that the brain can explore a maximal repertoire of its possible dynamical states, a feature which could confer obvious evolutionary advantages in terms of cognitive and behavioral flexibility. It has even been proposed that this cognitive flexibility and range may be a key property of adult human consciousness itself [Tononi, 2012]. An interesting research question therefore is whether changes in spontaneous brain activity produced by psilocybin are consistent with a displacement from this critical point—perhaps towards a more entropic or super-critical state (i.e. one closer to the extreme of disorder than normal waking consciousness) [Carhart-Harris et al., 2014]. Further motivating this hypothesis are subjective reports of hyper-associative cognition under psychedelics, indicative of unconstrained brain dynamics. Thus, in order to test this hypothesis, it makes conceptual sense to focus on variability in activity and FC parameters over time, instead of the default procedure of averaging these over a prolonged period. In what follows, we present empirical data that tests the hypothesis that brain activity becomes less ordered in the psychedelic state and that the repertoire of possible states is enhanced. After the relevant findings have been presented, we engage in a discussion to suggest possible strategies that may further characterize quantitatively where the “psychedelic brain” resides in state space relative to the dynamical position occupied by normal waking consciousness.
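The "repertoire of connectivity states" idea can be illustrated with a generic sliding-window analysis: estimate functional connectivity in short windows, cluster the windowed connectivity patterns, and count how many distinct states a recording actually visits. The sketch below shows only that general strategy; it is not the framework used in the paper, and the window length, step size, cluster count and synthetic data are all arbitrary choices of mine.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def connectivity_states(data, window=30, step=5, n_states=4):
    """Rough sketch of counting connectivity states in a multi-region recording.

    data : array of shape (n_timepoints, n_regions) of BOLD signals
    Returns the number of distinct states visited and the state label per window.
    """
    n_t, n_r = data.shape
    iu = np.triu_indices(n_r, k=1)                    # upper triangle of each correlation matrix
    windows = []
    for start in range(0, n_t - window + 1, step):
        corr = np.corrcoef(data[start:start + window].T)
        windows.append(corr[iu])                      # flatten each windowed FC pattern
    windows = np.array(windows)
    _, labels = kmeans2(windows, n_states, minit="++")  # cluster the windowed patterns
    return len(np.unique(labels)), labels

# Illustrative use on a hypothetical 300-volume, 10-region recording:
rng = np.random.default_rng(1)
demo = rng.normal(size=(300, 10))
n_visited, sequence = connectivity_states(demo)
print(n_visited, sequence[:10])
```

The paper's claim, in these terms, is that the sequence of states explored after psilocybin is drawn from a wider repertoire than under placebo.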

Watch John Coltrane Turn His Handwritten Poem Into a Sublime Musical Passage on "A Love Supreme"


Via Open Culture, of course. I had no idea that this piece (the 4th movement, "Psalm," on A Love Supreme) was John Coltrane's translation of his own poem into such a sublime piece of music. Amazing.

Watch John Coltrane Turn His Handwritten Poem Into a Sublime Musical Passage on A Love Supreme

in Music, Poetry | July 4th, 2014


On Vimeo, James Cary describes his video creation:
A few years ago, knowing I absolutely adored the John Coltrane album, “A Love Supreme,” my wife gave me this incredible book by Ashley Kahn: “A Love Supreme: The Story of John Coltrane’s Signature Album.” Reading the book, I discovered something remarkable: the fourth movement, Psalm, was actually John Coltrane playing the ‘words’ of the poem that was included in the original liner notes. Apparently he put the handwritten poem on the music stand in front of him, and ‘played’ it, as if it were music. I immediately played the movement while reading the poem, and the hair stood up on the back of my neck. It was one of the most inspirational and spiritual moments of my life.
I’ve seen some nice versions of this posted on the net, but wanted to make one using his exact handwriting. I also wanted to keep it simple. The music and John’s poem are what’s important. I hope you enjoy this. I hope this inspires you, no matter what ‘God’ you may believe in.
You can find a transcript of the poem below the jump. And while we have your attention, we’d also strongly encourage you to explore another post from our archive: John Coltrane’s Handwritten Outline for His Masterpiece A Love Supreme. Housed at the Smithsonian’s National Museum of American History, this handwritten document captures Coltrane’s original sketch for his 33-minute jazz masterpiece. It’s truly a treasure of American history.

via Ellen McGirt


I will do all I can to be worthy of Thee O Lord. 
It all has to do with it. 
Thank you God. 
Peace. 
There is none other. 
God is. It is so beautiful. 
Thank you God. God is all. 
Help us to resolve our fears and weaknesses. 
Thank you God. 
In You all things are possible. 
We know. God made us so. 
Keep your eye on God. 
God is. He always was. He always will be. 
No matter what…it is God. 
He is gracious and merciful. 
It is most important that I know Thee. 
Words, sounds, speech, men, memory, thoughts, 
fears and emotions – time – all related … 
all made from one … all made in one. 
Blessed be His name. 
Thought waves – heat waves-all vibrations – 
all paths lead to God. Thank you God. 

His way … it is so lovely … it is gracious. 
It is merciful – thank you God. 
One thought can produce millions of vibrations 
and they all go back to God … everything does. 
Thank you God. 
Have no fear … believe … thank you God. 
The universe has many wonders. God is all. His way … it is so wonderful. 
Thoughts – deeds – vibrations, etc. 
They all go back to God and He cleanses all. 
He is gracious and merciful…thank you God. 
Glory to God … God is so alive. 
God is. 
God loves. 
May I be acceptable in Thy sight. 
We are all one in His grace. 
The fact that we do exist is acknowledgement of Thee O Lord. 
Thank you God. 
God will wash away all our tears … 
He always has … 
He always will. 
Seek Him everyday. In all ways seek God everyday. 
Let us sing all songs to God 
To whom all praise is due … praise God. 
No road is an easy one, but they all 
go back to God. 
With all we share God. 
It is all with God. 
It is all with Thee. 
Obey the Lord. 
Blessed is He. 
We are from one thing … the will of God … thank you God. 
I have seen God – I have seen ungodly – 
none can be greater – none can compare to God. 
Thank you God. 
He will remake us … He always has and He always will. 
It is true – blessed be His name – thank you God. 
God breathes through us so completely … 
so gently we hardly feel it … yet, 
it is our everything. 
Thank you God. 
ELATION-ELEGANCE-EXALTATION
All from God. 
Thank you God. Amen. 

JOHN COLTRANE – December, 1964

Friday, July 04, 2014

Paul Bloom - Can Prejudice Ever Be a Good Thing?


In this TED Talk from the beginning of 2014, Paul Bloom talks about prejudice, drawing on his then-new book, Just Babies: The Origins of Good and Evil (Nov. 2013).

Can prejudice ever be a good thing?

TEDSalon NY2014 · 16:23 · Filmed Jan 2014


We often think of bias and prejudice as rooted in ignorance. But as psychologist Paul Bloom seeks to show, prejudice is often natural, rational ... even moral. The key, says Bloom, is to understand how our own biases work — so we can take control when they go wrong.
* * * * *

Paul Bloom explores some of the most puzzling aspects of human nature, including pleasure, religion, and morality.

Why you should listen

In Paul Bloom’s last book, How Pleasure Works, he explores the often-mysterious enjoyment that people get out of experiences such as sex, food, art, and stories. His latest book, Just Babies, examines the nature and origins of good and evil. How do we decide what's fair and unfair? What is the relationship between emotion and rationality in our judgments of right and wrong? And how much of morality is present at birth? To answer these questions, he and his colleagues at Yale study how babies make moral decisions. (How do you present a moral quandary to a 6-month-old? Through simple, gamelike experiments that yield surprisingly adult-like results.)   

Paul Bloom is a passionate teacher of undergraduates, and his popular Introduction to Psychology 110 class has been released to the world through the Open Yale Courses program. He has recently completed a second MOOC, “Moralities of Everyday Life”, that introduced moral psychology to tens of thousands of students. And he also presents his research to a popular audience through articles in The Atlantic, The New Yorker, and The New York Times. Many of the projects he works on are student-initiated, and all of them, he notes, are "strongly interdisciplinary, bringing in theory and research from areas such as cognitive, social, and developmental psychology, evolutionary theory, linguistics, theology and philosophy."

He says: "A growing body of evidence suggests that humans do have a rudimentary moral sense from the very start of life."

What others say

"Bloom is after something deeper than the mere stuff of feeling good. He analyzes how our minds have evolved certain cognitive tricks that help us negotiate the physical and social world." — New York Times

Brandon Keim - Evolution’s Contrarian Capacity for Creativity

From Nautilus, Facts So Romantic on Biology, this is an interesting article on creativity in evolution. The author begins with two small songbirds: the willow tit (Poecile montanus) and its close relative, the black-capped chickadee (Poecile atricapillus).
To the naked eye, there’s not much to distinguish between them. Both are small, with black-and-white heads and gray-black wings, seed-cracking bills, and a gregarious manner. For a long time, they were even thought to be the same species. The only obvious difference, at least with the willow tit I saw, was a duskier olive underbelly coloration.
These two nearly identical birds are a nice jumping off place for a discussion of diversity in evolution.

Evolution’s Contrarian Capacity for Creativity

Posted By Brandon Keim on Jul 02, 2014


The easily confused willow tit and black-capped chickadee. Image: f.c.franklin via Flickr / Brandon Keim

One of my favorite pastimes while traveling is watching birds. Not rare birds, mind you, but common ones: local variations on universal themes of sparrow and chickadee, crow and mockingbird.

I enjoy them in the way that other people appreciate new food or architecture or customs, and it can be a strange habit to explain. You’re 3,000 miles from home, and less interested in a famous statue than the pigeon on its head?! Yet there’s something powerfully fascinating about how familiar essences take on slightly unfamiliar forms; an insight, even, into the miraculous essence of life, a force capable of resisting the universe’s otherwise inevitable tendency to come to rest.

Take, for example, a small songbird known as the willow tit, encountered on a recent trip to Finland and closely related—Poecile montanus to Poecile atricapillus—to the black-capped chickadee, the official bird of my home state of Maine. To the naked eye, there’s not much to distinguish between them. Both are small, with black-and-white heads and gray-black wings, seed-cracking bills, and a gregarious manner. For a long time, they were even thought to be the same species. The only obvious difference, at least with the willow tit I saw, was a duskier olive underbelly coloration.

Which raises a question, asked by Darwin and J. B. S. Haldane and generations of biologists since: Why? Why is a bird, similar in so many ways to another, different in this one? It’s a surprisingly tricky question.

Generally speaking, we tend to think of evolution in purposeful terms: There must be a reason for difference, an explanation grounded in the chances of passing on one’s supposedly selfish genes. Perhaps those olive feathers provide better camouflage amidst Finnish vegetation, or have come to signify virility in that part of the world. As evolutionary biologists Suzanne Gray and Jeffrey McKinnon describe in a Trends in Ecology and Evolution review (pdf), differences in color are sometimes favored by natural selection—except, that is, when they’re not.

Often differences in color don’t have any function at all; they just happen to be. They emerge through what’s known as neutral evolution: mutations randomly spreading through populations. At times, this spread, this genetic drift, evenly distributes throughout the entire population, so the whole species changes together. Sometimes, though, the mutations confine themselves to different clusters within a species, like blobs of water cohering on a shower floor. 
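Neutral drift of this kind is easy to simulate: in a finite population, an allele with no fitness effect can still rise to high frequency, or vanish, purely through sampling noise from one generation to the next. The sketch below is a textbook Wright-Fisher model, not anything from the article, and the population size and starting frequency are arbitrary.

```python
import numpy as np

def neutral_drift(pop_size=500, start_freq=0.01, generations=2000, seed=0):
    """Wright-Fisher drift of a selectively neutral allele.

    Each generation the next population is drawn binomially from the current
    allele frequency: no selection, just sampling noise.
    """
    rng = np.random.default_rng(seed)
    freq = start_freq
    trajectory = [freq]
    for _ in range(generations):
        count = rng.binomial(2 * pop_size, freq)   # diploid population: 2N gene copies
        freq = count / (2 * pop_size)
        trajectory.append(freq)
        if freq in (0.0, 1.0):                     # allele lost or fixed
            break
    return trajectory

# A neutral mutation starting at 1% sometimes vanishes, sometimes takes over:
for seed in range(5):
    traj = neutral_drift(seed=seed)
    print(f"run {seed}: final frequency {traj[-1]:.2f} after {len(traj) - 1} generations")
```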
Given enough time and space, these processes can—at least theoretically, as experiments necessary for conclusive evidence would take millennia to run—generate new species. Such appears to be the case with greenish warblers living around the Tibetan plateau, who during the last 10,000 years have diverged into multiple, non-interbreeding populations, even though there are no geographic barriers separating them or evidence of local adaptations favored by natural selection. The raw material of life simply diversified. One became many, because that’s just what it does.[1]

Through this lens, evolution is an intrinsically generative force, with diversity proceeding ineluctably from the very existence of mutation. And here one can step back for a moment, go all meta, and ask: Where does mutation itself come from? How did evolution, and evolvability, evolve?

It’s a question on the bleeding edge of theoretical biology, and one studied by Joanna Masel at the University of Arizona. Her work suggests that, several billion years ago, when life consisted of self-replicating chemical arrangements, a certain amount of mutation was useful: After all, it made adaptation possible, if merely at the level of organized molecules persisting in gradients of heat and chemistry. There couldn’t be too much of it, though. If there were, the very mechanics of replication would break down.

Molecular biologist Irene Chen of the University of California, Santa Barbara, has further illuminated that delicate balance. Her work posits that, as an information storage system, DNA was less error-prone than RNA, its single-stranded molecular forerunner and the key material of the so-called RNA world thought to have preceded life as we now know it.

So, then, one can imagine, early in Earth’s history, life evolving again and again, crashing on the rocks of time and circumstance, until finally it hit upon just the right mutation rate—one that eons later would produce organisms and species and ecosystems that reproduce themselves and persist across time and chance.

That’s the remarkable thing about life: It continues. It keeps going and growing. Barring catastrophic asteroid strikes, or possibly the exponential population growth of a certain bipedal, big-brained hominid, life on Earth maintains complexity, actually increases it, even as the natural tendency of systems is to become simpler. Clocks unwind, suns run down, individual lives end, the Universe itself heads towards its own cold, motionless death; such is the Second Law of Thermodynamics, inviolable and inescapable.

Yet so long as Earth’s sun shines and genetic mutations arise, evolution may maintain its own thermodynamic law. Black-capped chickadees and willow tits diverge. Life pushes back. 

Footnote 
1. To be sure, the concept of neutrally driven biodiversity isn’t universally accepted. There may be subtle, intrinsic advantages to diversification. An example comes from the models of James O’Dwyer, a theoretical ecologist at the University of Illinois: Simply by virtue of their novelty, new species may be intrinsically less vulnerable to pathogens that afflicted their evolutionary parents.
So, then, perhaps willow tits and black-capped chickadees evolved slightly different feather patterns because they provided some immediate, direct benefit; or maybe it happened just because, for no reason at all, really; or maybe there was a subtle benefit intrinsic to the process of variation itself; or maybe it was some combination of all three, varying by time and place.
So, it’s complicated. But whatever the complications, all these processes share something very fundamental: the emergence of variety over time as life’s essential property.

Brandon Keim (@9brandon) is a freelance journalist specializing in science, environment, and culture.

Thursday, July 03, 2014

Neuroskeptic - Is It Time To Redraw the Map of the Brain?

A new study published online ahead of print publication in Brain: A Journal of Neurology suggests that our current maps of human brain lesion-deficit relationships may be systematically distorted and need to be reconsidered. Neuroskeptic offered a nice overview of the study, accessible to non-PhD readers. The article is also open access and available online.

I have included the summary from Neuroskeptic and the beginning of the full article - follow the links below to see the original article.

Is It Time To Redraw the Map of the Brain?

By Neuroskeptic | July 1, 2014 

A provocative and important paper just out claims to have identified a pervasive flaw in many attempts to map the function of the human brain.

University College London (UCL) neuroscientists Yee-Haur Mah and colleagues say that in the light of their findings, “current inferences about human brain function and deficits based on lesion mapping must be re-evaluated.”

Lesion mapping is a fundamental tool of modern neuroscience. By observing the particular symptoms (deficits) people develop after suffering damage (lesions) to particular parts of the brain, we can work out what functions those various parts perform. If someone loses their hippocampus, say, and gets amnesia, you might infer that the function of the hippocampus is related to memory – as indeed it is.

However, there’s a problem with this approach, Mah et al say. Conventional lesion mapping treats each point in the brain (voxel) individually, as a possible correlate of a given deficit. This is called a mass univariate approach.
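In code, the mass univariate approach amounts to looping over voxels and running an independent statistical test at each one, comparing the deficit scores of patients with and without damage there. The sketch below is a generic illustration of that procedure (here with a per-voxel t-test); it is not the specific pipeline used in the study, and the array layout and threshold are my own assumptions.

```python
import numpy as np
from scipy import stats

def mass_univariate_map(lesions, deficit, alpha=0.05):
    """Generic mass-univariate lesion-deficit mapping.

    lesions : (n_patients, n_voxels) binary array, 1 = voxel damaged in that patient
    deficit : (n_patients,) behavioural score for each patient
    Returns per-voxel p-values and a boolean map of 'significant' voxels.
    """
    n_voxels = lesions.shape[1]
    pvals = np.ones(n_voxels)
    for v in range(n_voxels):
        damaged = deficit[lesions[:, v] == 1]
        intact = deficit[lesions[:, v] == 0]
        if len(damaged) > 1 and len(intact) > 1:
            # one independent test per voxel -- the "mass univariate" step
            _, pvals[v] = stats.ttest_ind(damaged, intact, equal_var=False)
    return pvals, pvals < alpha   # (no multiple-comparison correction in this sketch)
```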

The problem is that the shape and location of brain lesions is not random – some areas are more likely to be affected than others, and the extent of the lesions varies in different places.

What this means is that the presence of damage in a certain voxel may be correlated with damage in other voxels. So damage in a voxel might be correlated with a given deficit, even though it has no role in causing the deficit, just because it tends to be damaged alongside another voxel that really is involved.

Mah et al call this problem ‘parasitic association’. In a large sample of diffusion-weighted MRI scans from 581 stroke patients, the authors show that the co-occurrence of damage across voxels leads to systematic, large biases in mass univariate deficit mapping.

The biases follow a complex geometry throughout the brain: as this lovely (but scary) image shows -


This shows the direction and magnitude of the error that would afflict a standard attempt to localize a hypothetical deficit that was truly associated with points throughout the brain. In some areas, the bias ‘points’ forward, so the deficit would be wrongly mapped as being further forward than it really was. In other places, it points in other directions.

The mean size of the expected error is 1.5 cm, but with a high degree of variability, so it is much worse in some areas.

Worse yet, Mah et al say that in cases where the same deficit can be caused by damage to two, non-adjacent areas, univariate lesion mapping might fail to pinpoint either of them. Instead, it could wrongly implicate a nearby, unrelated area.

The authors conclude that the only way to avoid this problem is by using multivariate statistics to explicitly model voxel interrelationships, e.g. a machine learning approach. This will require large datasets, but they caution that merely having a big sample, without multivariate statistics, would achieve nothing. They conclude on a somewhat downbeat note:

It is outside the scope of this study to determine the optimal multivariate approach: our focus here is on the evidence of the misleading picture the mass-univariate approach has created, and the need to review it wholesale. Taken together, our work demonstrates a way forward to place the study of focal brain lesions on a robust theoretical footing.
I’m not sure the outlook is quite so bleak. It’s a very nice paper; however, Mah et al’s dataset is based purely on stroke patients. Yet there are many other sources of brain lesions that are used for lesion mapping: tumours, infections, and head injuries to name a few.

These kinds of lesions probably throw up parasitic associations as well; however, these might be very different from the kind seen in strokes. This is because strokes, unlike other kinds of lesions, are always centered on blood vessels.

Mah et al note that the bias map they found is clustered around the major cerebral arteries and veins, but the obvious conclusion to draw from this is that it’s only applicable to strokes.

Whether other kinds of lesions produce substantial biases remains to be established. Until we know that, we shouldn’t be rushing to redraw any maps just yet.
Full Citation:
Mah, Y., Husain, M., Rees, G., & Nachev, P. (2014, Jun 28). Human brain lesion-deficit inference remapped. Brain; DOI: 10.1093/brain/awu164

Included here are the abstract and the introduction - follow the link in the title to download the PDF for yourself.

Human brain lesion-deficit inference remapped

Yee-Haur Mah, Masud Husain, Geraint Rees, and Parashkev Nachev

Author Affiliations
1. Institute of Neurology, UCL, London, WC1N 3BG, UK
2. Department of Clinical Neurology, University of Oxford, Oxford OX3 9DU, UK
3. Institute of Cognitive Neuroscience, UCL, London WC1N 3AR, UK
4. Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK
Summary

Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury—the commonest aetiology in lesion-deficit studies—where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant. Positively, we show that novel machine learning techniques employing high-dimensional inference can nonetheless accurately converge on the true locus. We conclude that current inferences about human brain function and deficits based on lesion mapping must be re-evaluated with methodology that adequately captures the high-dimensional structure of lesion data.

Introduction

The study of patients with focal brain damage first revealed that the human brain has a functionally specialized architecture (Broca, 1861; Wernicke, 1874). Over the past century and a half such studies have been critical to identifying the distinctive neural substrates of language (Broca, 1861; Wernicke, 1874), memory (Scoville and Milner, 1957), emotion (Adolphs et al., 1995; Calder et al., 2000), perception (Goodale and Milner, 1992), decision-making (Bechara et al., 1994), attention (Egly et al., 1994; Mort et al., 2003), and intelligence (Gläscher et al., 2009), casting light on the anatomical basis of deficits resulting from dysfunction of the brain. Though functional imaging has revolutionized the field of brain function mapping in the last 20 years, the necessity of a brain region for a putative function—arguably the strongest test—can only be established by showing a deficit when the function of the region is disrupted. Inactivating brain areas experimentally cannot easily be done in humans; the special cases of transcranial magnetic and direct current stimulation, though potentially powerful, are restricted temporally to days and anatomically to accessible regions of cortex.


The only comprehensive means of establishing functional necessity thus remains the study of patients with naturally occurring focal brain lesions (Rorden and Karnath, 2004). Though single patients may sometimes be suggestive, robust, population-level inferences about lesion-deficit relationships require aggregation of data from many patients (Karnath et al., 2004). Analogously to functional brain imaging, a statistical test comparing groups of patients with and without a deficit is iteratively applied point-by-point to brain lesion images parcellated into many volume units (voxels) (Bates et al., 2003; Karnath et al., 2004). Voxels that cross the significance threshold are then taken to identify the functionally critical brain areas whose damage leads to the deficit.


Crucially, this ‘mass-univariate’ approach assumes that the resultant structure-deficit localization is not distorted by co-incidental damage of other, non-critical loci in each patient: in other words, that damage to each voxel is independent of damage to any other. This cannot be assumed in the human brain. Collaterally damaged but functionally irrelevant voxels might be associated with voxels critical for a deficit through an idiosyncrasy of the pathological process—the distribution of the vascular tree, for example—while having no relation to the function of interest. Such associations would lead to a distortion of the inferred anatomical locus.


Importantly, these ‘parasitic’ voxel-voxel associations can be detected only by examining the multivariate pattern of damage across the entire brain, and across the entire group. Studying large numbers of patients with the standard approach simply exacerbates the problem, because such consistent error will also consistently displace inferred critical brain regions from their true locations. Equally, replicating a study with the same number of patients will replicate the error too: observing the same result across different research groups and epochs offers no reassurance. Instead, the pattern of damage must be captured by a high-dimensional multivariate distribution that describes how the presence or absence of damage at every voxel within each brain image is related to damage to all other voxels. The presence of ‘parasitic’ voxel-voxel associations would then manifest as a hidden bias within the multivariate distribution, a complex correlation between individual patterns of damage apparent only in a high-dimensional space and opaque to inspection with simple univariate tools.

To illustrate the problem, consider the 2D synthetic example in Fig. 1, where damage to any part of area ‘A’ alone may disrupt a putative function of interest, but ‘B’ plays no role in this function of interest. If the lesions used to map the functional dependence on A follow a stereotyped pattern where damage to any part of A is systematically associated with collateral damage to the non-critical area B, both areas might appear to be significantly associated even if B is irrelevant to the function of interest. Crucially, if the pattern of the lesions within each patient is such (for reasons to do with factors unconnected to function) that the spatial variability of damage to B is less than to A, B will not only be erroneously determined to be critical but will have a higher significance value for such an association than A. The apparent locus of a lesion function deficit will therefore be displaced from A (the true locus) to B. Thus a hidden bias in the pattern of damage—hidden because it is apparent only when examining the pattern as a whole, in a multivariate way—distorts the spatial inference. 
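A toy version of this Fig. 1 scenario makes the distortion concrete: area A truly drives the deficit but is damaged at variable locations, while the irrelevant area B is collaterally damaged in a stereotyped way, so a voxel-wise test ends up scoring B more highly than any single voxel of A. The simulation below is a hypothetical illustration of that effect, not the paper's analysis; every number in it is arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 400

# Area A: 4 voxels, any one of which disrupts the function when damaged.
# Each lesioned patient has damage to ONE randomly chosen A voxel (high spatial variability).
# Area B: 2 voxels, functionally irrelevant, but BOTH damaged in every lesioned patient
# (low spatial variability) -- the "parasitic" collateral damage.
lesioned = rng.random(n_patients) < 0.5
A = np.zeros((n_patients, 4), dtype=int)
B = np.zeros((n_patients, 2), dtype=int)
A[lesioned, rng.integers(0, 4, size=lesioned.sum())] = 1   # one random A voxel per lesion
B[lesioned] = 1                                            # both B voxels damaged every time

# The deficit depends only on whether *any* A voxel is damaged (plus noise).
deficit = A.any(axis=1) * 1.0 + rng.normal(scale=0.5, size=n_patients)

for name, area in [("A", A), ("B", B)]:
    for v in range(area.shape[1]):
        dmg, ok = deficit[area[:, v] == 1], deficit[area[:, v] == 0]
        p = stats.ttest_ind(dmg, ok, equal_var=False).pvalue
        print(f"area {name}, voxel {v}: p = {p:.2e}")

# Typically each B voxel comes out far more "significant" than any single A voxel,
# because each A voxel is damaged in only ~1/4 of lesioned patients while B is damaged in all.
```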


Whether or not such a hidden bias exists has not been previously investigated. Here we analyse the largest reported series of focal brain lesions (n = 581) to show that it does exist, and that it compels a revision of previous lesion-deficit relationships within a wholly different inferential framework.

The Boundaries of the Knowable - An Examination of the BIG Questions


This is a 10-part series from The Open University, featuring Professor Russell Stannard exploring the BIG questions - What is consciousness? What is free will? What caused the Big Bang? What is time? Each "episode" is relatively short, between 7 and 14 minutes, so these are bite-size examinations of the big questions.

The Boundaries of the Knowable


Professor Russell Stannard (Department of Physical Sciences, The Open University) presents a series with numerous open questions about our consciousness and the physical brain, free will, cosmology and life. Questions such as: What is consciousness? What is free will? What caused the Big Bang? What is time?

One day, he says, we will reach the limits of science and it will grind to a halt. Science is the pursuit of knowledge to understand the physical world around us. One day we will reach the boundaries of the knowable. Perhaps we have already reached them. There are questions that we cannot answer and perhaps never will.

Is there free will, or is everything pre-determined? Are we predictable? If we go down to the subatomic level, we can observe that even the electrons which orbit the nucleus are unpredictable. Surely this means that if the building blocks of life themselves are unpredictable, then this unpredictability will translate into everything else? Perhaps we are destined never to know the answer to this question.

Edwin Hubble discovered that the universe is expanding, and we now know that 13.7 billion years ago there was a Big Bang from which our universe, and eventually our galaxy, our solar system and our world, emerged. But what caused the Big Bang? 'Cause and effect' is a fundamental concept of science. But if there was no linear time or anything technically before the Big Bang, how could there be a 'cause'?

If everything had been created randomly, the chances of the universe being able to sustain life would be effectively zero. So why is our universe so 'life friendly'? Was it created by an omnipotent being? Are there life forms out in other parallel universes? There could be many 'earth-like' planets out there. Evolution took billions of years on Earth. There could be older galaxies out there, where life forms are more advanced than we are, simply because they have had more time to evolve.

The nature of time itself is fascinating. We are used to thinking in three dimensions: length, height and width. We also understand time as linear. But what if time were in fact a dimension like the other three? What if there were four dimensions? Astronauts experience the slowing of time and the contraction of distances in space. They experience the four dimensions. Science calls this 'spacetime'. But what is real: what we see here on Earth, or what the astronauts experience in space? These questions are still causing debate among scientists. These abstract questions are important, but will we ever fathom the riddles that they present?
Watch the full documentary now (playlist)

Your Sense of Humor Can Improve Your Health, Get You Pregnant, and Even Save Your Life

http://followthiscoach.files.wordpress.com/2012/07/laugh.jpg

I'm good with all of that except the pregnant part - that's not funny. This short article comes from The Atlantic and basically serves as a primer on the health benefits of humor and laughter - including a list of the studies Julie Beck used to write this article.

I am a firm believer in humor's healing power - the only "homework" I give to ALL of my clients is to laugh: somehow, some way, find something that makes you laugh, and laugh a lot.

Funny or Die

How your sense of humor can improve your health, get you pregnant, and even save your life

Julie Beck | May 21 2014

Rami Niemi

Laughter is the best medicine, or so the cliché goes. Actually, given the choice between laughter and, say, penicillin or chemotherapy, you’re probably better off choosing one of the latter. Still, a great deal of research shows that humor is extraordinarily therapeutic, mentally and physically.

Laughing in the face of tragedy seems to shield a person from its effects. A 2013 review of studies found that among elderly patients, laughter significantly alleviated the symptoms of depression [1]. Another study, published early this year, found that firefighters who used humor as a coping strategy were somewhat protected from PTSD [2]. Laughing also seems to ease more-quotidian anxieties. One group of researchers found that watching an episode of Friends (specifically, Season Five’s “The One Where Everybody Finds Out”) was as effective at improving a person’s mood as listening to music or exercising, and more effective than resting [3].

Laughter even seems to have a buffering effect against physical pain. A 2012 study found that subjects who were shown a funny video displayed higher pain thresholds than those who saw a serious documentary [4]. In another study, postsurgical patients requested less pain medication after watching a funny movie of their choosing [5].

Other literature identifies even more specific health benefits: laughing reduced arterial-wall stiffness (which is associated with cardiovascular disease) [6]. Women undergoing in vitro fertilization were 16 percent more likely to get pregnant when entertained by a clown dressed as a chef [7]. And a regular old clown improved lung function in patients with chronic obstructive pulmonary disease [8]. More generally, a mirthful life is likely to be a long one. A study of Norwegians found that having a sense of humor correlated with a high probability of surviving into retirement [9].

Unfortunately, there’s a not-so-funny footnote to all this: the people who are best at telling jokes tend to have more health problems than the people laughing at them. A study of Finnish police officers found that those who were seen as funniest smoked more, weighed more, and were at greater risk of cardiovascular disease than their peers [10]. Entertainers typically die earlier than other famous people [11], and comedians exhibit more “psychotic traits” than others [12]. So just as there’s research to back up the conventional wisdom on laughter’s curative powers, there also seems to be truth to the stereotype that funny people aren’t always having much fun. It might feel good to crack others up now and then, but apparently the audience gets the last laugh.



The Studies:

[1] Shaw, “Does Laughter Therapy Improve Symptoms of Depression Among the Elderly Population?” (PCOM Physician Assistant Studies dissertation, 2013)
[2] Sliter et al., “Is Humor the Best Medicine?” (Journal of Organizational Behavior, Feb. 2014)
[3] Szabo et al., “Experimental Comparison of the Psychological Benefits of Aerobic Exercise, Humor, and Music” (Humor, Sept. 2005)
[4] Dunbar et al., “Social Laughter Is Correlated With an Elevated Pain Threshold” (Proceedings of the Royal Society B, March 2012)
[5] Rotton and Shats, “Effects of State Humor, Expectancies, and Choice on Postsurgical Mood and Self-Medication” (Journal of Applied Social Psychology, Oct. 1996)
[6] Vlachopoulos et al., “Divergent Effects of Laughter and Mental Stress on Arterial Stiffness and Central Hemodynamics” (Psychosomatic Medicine, May 2009)
[7] Friedler et al., “The Effect of Medical Clowning on Pregnancy Rates After In Vitro Fertilization and Embryo Transfer” (Fertility and Sterility, May 2011)
[8] Brutsche et al., “Impact of Laughter on Air Trapping in Severe Chronic Obstructive Lung Disease” (International Journal of Chronic Obstructive Pulmonary Disease, March 2008)
[9] Svebak et al., “A 7-Year Prospective Study of Sense of Humor and Mortality in an Adult County Population” (The International Journal of Psychiatry in Medicine, June 2010)
[10] Kerkkänen et al., “Sense of Humor, Physical Health, and Well-Being at Work” (Humor, March 2004)
[11] Rotton, “Trait Humor and Longevity” (Health Psychology, July 1992)
[12] Ando et al., “Psychotic Traits in Comedians” (The British Journal of Psychiatry, May 2014)

Wednesday, July 02, 2014

3 Things Everyone Should Know Before Growing Up (NPR)

This is a nice post from NPR's 13.7 Cosmos and Culture blog - numbers one and three would have helped me immeasurably if someone had shared that wisdom. Number two I would have ignored, but it has become a cornerstone of how I live my life. There is IQ (what the tests measure) and then there is intelligence, maybe better known as wisdom (knowledge + experience).

3 Things Everyone Should Know Before Growing Up

by TANIA LOMBROZO
June 30, 2014


We take it for granted that children should play. Why not adults? iStockphoto

With peak graduation season just behind us, we've all had the chance to hear and learn from commencement speeches — without even needing to attend a graduation. They're often full of useful advice for the future as seniors move on from high school and college. But what about the stuff you wish you'd been told long before graduation?

Here are just three of the many things I wish I'd known in high school, accumulated at various points along the way to becoming a professor of psychology.

1. People don't judge you as harshly as you think they do.

In a 2001 study, psychologists Kenneth Savitsky, Nicholas Epley and Thomas Gilovich asked college students to consider various social blunders: accidentally setting off the alarm at the library, being the sole guest at a party who failed to bring a gift or being spotted by classmates at the mall while carrying a shopping bag from an unfashionable store. Some students imagined experiencing these awkward moments themselves — let's call them the "offenders" — while others considered how they, or another observer, would respond watching someone else do so. We'll call them the "observers."

The researchers found that offenders thought they'd be judged much more harshly than the observers actually judged people for those offenses. In other words, observers were more charitable than offenders thought they would be.

In another study, students who attempted a difficult set of anagrams thought observers' perception of their intellectual ability would plummet. In fact, observers' opinions hardly shifted at all.

Why do we expect others to judge us more harshly than they do?

One of the main reasons seems to be our obsessive focus on ourselves and our own blunders. If you fail to bring a gift to a party, you might feel embarrassed and focus exclusively on that single bit of information about you. In contrast, other people will form an impression of you based on lots of different sources of information, including your nice smile and your witty banter. They'll also have plenty to keep them occupied besides you: enjoying a conversation, taking in the view, planning their evening or worrying about the impression that they are making. We don't loom nearly as large in other people's narratives as we do in our own.

Now, it isn't the case that others are always charitable. Sometimes they do judge us harshly. What the studies find is that others judge us less harshly than we think they will. But that should be enough to provide some solace. We can take it as an invitation to worry less about what others think of us and as a reminder to be generous in how we judge them.

2. You should think of intelligence as something you develop.

Is a person's intelligence a fixed quantity they're born with? Or is it something malleable, something that can change throughout the lifespan?

The answer is probably a bit of both. But a large body of research suggests you're better off thinking of intelligence as something that can grow — a skill you can develop — and not as something set in stone. Psychologist Carol Dweck and her colleagues have been studying implicit theories or "mindsets" about intelligence for decades, and they find that mindset really matters. People who have a "growth mindset" typically do better in school and beyond than those with a "fixed mindset."

One reason mindset is so important is because it affects how people respond to feedback.

Suppose George and Francine both do poorly on a math test. George has a growth mindset, so he thinks to himself: "I'd better do something to improve my mathematical ability. Next time I'll do more practice problems!" Francine has a fixed mindset, so she thinks to herself: "I guess I'm no good at math. Next time I won't bother with the honors course!" And when George and Francine are given the option of trying to solve a hard problem for extra credit, George will see it as an attractive invitation to grow his mathematical intelligence and Francine as an unwelcome opportunity to confirm she's no good at math.

Small differences in how George and Francine respond will, over time, generate big differences in the experiences they expose themselves to, their attitude toward math and the proficiency they ultimately achieve. (The gendered name choices here are not accidental: Girls often have a fixed mindset when it comes to mathematical ability; mindset probably accounts for some of the gender gap in girls' and boys' performance in mathematics in later school years.)

The good news is that mindsets are themselves malleable. Praising children's effort rather than their intelligence, for example, can help instill a growth mindset. And simply reading about the brain's plasticity might be enough to shift people's mindsets and generate beneficial effects.

That's enough to convince me that whether or not intelligence is malleable, our skills and achievements — the things we do with our intelligence — certainly are. Let's do what we can to "grow" them.

3. Playing isn't a waste of time.

We take it for granted that children can and should play. By adulthood, that outlook is expected to give way as we make time for more "mature" preoccupations. In her recent book Overwhelmed: Work, Love, and Play When No One Has the Time, Brigid Schulte takes a close look at how American adults spend their leisure time. She isn't too impressed: We don't have much of it (especially women and especially mothers), and we don't enjoy it as much as we could.

Young adults are somewhere in the transition: too old for "child's play" and not yet into adulthood. But the lesson from psychology is that there's a role for play at all ages, whether it's elaborate games of make-believe, rule-based games, unstructured summer playtime or forms of "higher culture," like art, music and literature. Playing is a way to learn about ourselves and about the world. Playing brings with it a host of emotional benefits.

Play is joyful in part because it's an end in itself. It's thus perhaps ironic (but fortuitous) that play is also a means to greater wellbeing and productivity, even outside the playroom. So make time for play; it's not something to outgrow.

Finally, if you're in search of more advice, check out NPR's collection of more than 300 commencement addresses, covering 1774 to the present.