Tuesday, March 11, 2014

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

Recently, I shared an article from George Dvorsky called "You Might Never Upload Your Brain Into a Computer," in which he outlined a series of reasons why mind uploading may never happen:
1. Brain functions are not computable
2. We’ll never solve the hard problem of consciousness
3. We’ll never solve the binding problem
4. Panpsychism is true
5. Mind-body dualism is true
6. It would be unethical to develop
7. We can never be sure it works
8. Uploaded minds would be vulnerable to hacking and abuse
While I disagree with at least two of his points (I am not convinced panpsychism is true, and I am VERY skeptical of mind-body dualism), I applaud the spirit of the piece.

Likewise, this recent post from John Grohol at Psych Central's World of Psychology calls out futurist Ray Kurzweil on his claims around computer sentience, i.e., the singularity.

Why Ray Kurzweil is Wrong: Computers Won’t Be Smarter Than Us Anytime Soon

By John M. Grohol, Psy.D.



“When Kurzweil first started talking about the “singularity”, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human – will be passed in 2029.”

Sorry, but Ray Kurzweil is wrong. Computers are nowhere near surpassing humans, and it's easy to understand why.

Intelligence is one thing. But it’s probably the pinnacle of human narcissism to believe that we could design machines to understand us long before we even understand ourselves. The ancient Greek maxim, after all, enjoins us to “Know thyself.”

Yet, here it is squarely in 2014, and we still have only an inkling of how the human brain works. The very font of our intelligence and existence is contained in the brain — a simple human organ like the heart. Yet we don’t know how it works. All we have are theories.

Let me reiterate this: We don’t know how the human brain works.

How can anyone in their right mind say that, after a century of study into how the brain operates, we’re suddenly going to crack the code in the next 15 years?

And crack the code one must. Without understanding how the brain works, it’s ludicrous to say we could design a machine to replicate the brain’s near-instantaneous processing of hundreds of different sensory inputs from dozens of trajectories. That would be akin to saying we could design a spacecraft to travel to the moon before designing — and understanding how to design — the computers that would take the craft there.

It’s a little backwards to think you could create a machine to replicate the human mind before you understand the basics of how the human mind makes so many connections, so easily.

Human intelligence, as any psychologist can tell you, is a complicated, complex thing. The standard tests for intelligence aren’t just paper-and-pencil knowledge quizzes. They involve the manipulation of objects in three-dimensional spaces (something most computers can’t do at all), understanding how objects fit within a larger system of objects, and other tests like this. It’s not just good vocabulary that makes a person smart. It’s a combination of skills, thought, knowledge, experience and visual-spatial skills. Most of which even the smartest computer today only has a rudimentary grasp of (especially without the help of human-created GPS systems).

Robots and computers are nowhere close to human intelligence. In terms of how close they are today to “outsmarting” their makers, they are probably at about the level of an ant. A self-driving car that relies on other computer systems — again, created by humans — is hardly an example of computer-based, innate intelligence. A computer that can answer trivia in a game show or play a game of chess isn’t really equivalent to the knowledge that even the most rudimentary blue-collar job holder holds. It’s a sideshow act. A distraction meant to demonstrate the very limited, singular focus computers have historically excelled at.

The fact that anyone even needs to point out that single-purpose computers are only good at the singular task they’ve been designed to do is ridiculous. A Google-driven car can’t beat a Jeopardy player. And the Jeopardy computer that won can’t tell you a thing about tomorrow’s weather forecast. Or how to solve a chess problem. Or what’s the best way to retrieve a failed space mission. Or when’s the best time to plant crops in the Mississippi delta. Or even how to turn a knob in the right direction to make sure the water turns off.

If you can design a computer to pretend to be a human in a very artificial, lab-created task of answering random dumb questions from a human — that’s not a computer that’s “smarter” than us. That’s a computer that’s incredibly dumb, yet was able to fool a stupid panel of judges judging by criteria that are all but disconnected from the real world.

And so that’s the primary reason Ray Kurzweil is wrong — we will not have any kind of sentient intelligence — in computers, robots, or anything else — in a mere 15 years. Until we understand the foundation of our own minds, it’s narcissistic (and a little bit naive) to believe we could design an artificial one that could function just as well as our own.

We are in the 1800s in terms of our understanding of our minds, and until we reach the 21st century, computers too will be in the 1800s of their ability to become sentient.


Read more:
Why robots will not be smarter than humans by 2029 (a reply to “2029: the year when robots will have the power to outsmart their makers”)

Feeling Self-Critical? Try Mindfulness (Emily Nauman at Greater Good)

This is a brief but useful article from the Greater Good Science Center (UC Berkeley) on using mindfulness to deal with the inner critic, although they frame it more in terms of self-esteem.

I would suggest using mindfulness directly on the inner critic by learning to identify its voice and its criticisms, and then becoming curious about the reasons the critic might be acting this way. What does it want? What are its needs? How is it trying to serve you?

And that last question is crucial - when we begin to understand that all of our "parts," including (and maybe especially) the inner critic, came into existence to help us in some way, then our relationship with them can shift from adversarial to cooperative.

Feeling Self-Critical? Try Mindfulness

New research shows that mindfulness may help us to stop comparing ourselves to other people

By Emily Nauman | March 9, 2014


Our Mindful Mondays series provides ongoing coverage of the exploding field of mindfulness research.

Many of us feel great about ourselves when we focus on how much success we’ve had in comparison to others. But what happens when we don’t succeed? Self-esteem sinks. 

New research shows that developing mindfulness skills may help us build secure self-esteem—that is, self-esteem that endures regardless of our success in comparison to those around us.

Christopher Pepping and his colleagues at Griffith University in Australia conducted two studies to demonstrate that mindfulness skills help enhance self-esteem.

In the first study, the researchers administered questionnaires to undergraduate students in an introductory psychology course to measure their mindfulness skills and their self-esteem. The researchers anticipated that four aspects of mindfulness would predict higher self-esteem:
  • Labeling internal experiences with words, which might prevent people from getting consumed by self-critical thoughts and emotions;
  • Bringing a non-judgmental attitude toward thoughts and emotions, which could help individuals have a neutral, accepting attitude toward the self;
  • Sustaining attention on the present moment, which could help people avoid becoming caught up in self-critical thoughts that relate to events from the past or future;
  • Letting thoughts and emotions enter and leave awareness without reacting to them.
The results, published in The Journal of Positive Psychology, support the researchers’ predictions: students with these mindfulness skills indeed had higher self-esteem. However, this study did not clarify whether mindfulness causes self-esteem, or whether those with mindfulness also had higher self-esteem because of some other factor.

In order to find out if mindfulness directly causes higher self-esteem, the researchers conducted a second study. They instructed half of the participants to complete a 15-minute mindfulness meditation that focused on the sensation of their breath. The other half of participants read a 15-minute story about Venus fly-trap plants. All of the participants completed questionnaires that assessed their level of self-esteem and mindfulness both before and after they completed the 15-minute task.

Consistent with the researchers’ predictions, those who participated in the mindfulness meditation had higher scores in mindfulness and in self-esteem after meditating, while there was no change in these dimensions for those who read the Venus fly-trap plant story.

Because the only difference between the two groups was whether or not they participated in a mindfulness exercise, these results suggest that mindfulness directly causes enhanced self-esteem.
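To make the logic of that comparison concrete, here is a minimal sketch (with made-up numbers, not the study's data) of how a pre/post, two-group design like this is typically evaluated: compute each participant's change score and test whether the change differs between the two groups.

    # Hypothetical illustration of the study's logic (numbers are invented,
    # not data from Pepping and colleagues).
    from scipy import stats

    # self-esteem change (post minus pre) for each participant
    meditation_change = [4, 3, 5, 2, 4, 3, 5, 4]    # 15-minute mindfulness group
    reading_change = [0, 1, -1, 0, 1, 0, 0, -1]     # Venus fly-trap story group

    t_stat, p_value = stats.ttest_ind(meditation_change, reading_change)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A reliably larger change in the meditation group is what supports the
    # causal interpretation described above.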

The authors write that because the effects of the mindfulness exercise on self-esteem in this study were temporary, future research should examine if mindfulness interventions can lead to long-term changes in self-esteem.

However, these findings are promising. The authors write, “Mindfulness may be a useful way to address the underlying processes associated with low self-esteem, without temporarily bolstering positive views of oneself by focusing on achievement or other transient factors. In brief, mindfulness may assist individuals to experience a more secure form of high self-esteem.”

About The Author

Emily Nauman is a GGSC research assistant. She completed her undergraduate studies at Oberlin College with a double major in Psychology and French, and has previously worked as a research assistant in Oberlin’s Psycholinguistics lab and Boston University’s Eating Disorders Program.


Monday, March 10, 2014

John Hammond - The Search for Robert Johnson


Robert Johnson is one of the legendary blues guitarists of the Mississippi Delta blues tradition - and we know very little about his life and his death at the age of 27. Largely forgotten until the 1960s, Johnson's blues style and guitar skill influenced a generation of guitarists and musicians, including Keith Richards, Jeff Beck, Eric Clapton, Jimmy Page, and many others.

This documentary by John Hammond tries to trace the life of Johnson with the limited and contradictory information available. As an added bonus, I have included the Radiolab episode (from NPR) on the "crossroads" myth about the origins of Johnson's skills.

The Search for Robert Johnson

Uploaded on Mar 14, 2011


A very good bio-doc (from 1992) that tries to untangle the life and myths of blues legend Robert Johnson. This is a challenging task, as not a lot is known about Johnson except through his music and through lore. There is speculation at times, but this is inevitable. It still uncovers a lot, from his rejection by his family (blues was the work of the devil) to the darkness of his lyrics and the mysterious circumstances surrounding his death.

I would have preferred the original music of Johnson, but narrator John Hammond does a very satisfactory job in his renditions. Relatively minor players "Honeyboy" Edwards and Johnny Shines give classic delta blues performances that stand out. Appearances by Eric Clapton and Keith Richards help to emphasize Johnson's lasting impact on blues and rock.

Johnson was never interviewed, and his performance was never captured on film. Besides his music, all that is left are oral accounts, peppered with exaggeration and myth. An accurate, objective bio may be impossible to achieve. But The Search for Robert Johnson comes about as close as might be expected, and has great entertainment value as well.
* * * * *

Crossroads

Monday, April 16, 2012



Crossroad at night (eioua/flickr/CC-BY-2.0)

In this short, we go looking for the devil, and find ourselves tangled in a web of details surrounding one of the most haunting figures in music--a legendary guitarist whose shadowy life spawned a legend so powerful, it's still being repeated...even by fans who don't believe a word of it.

For years and years, Jad's been fascinated by the myth of what happened to Robert Johnson at the crossroads in Clarksdale, Mississippi. The story goes like this: back in the 1920s, Robert Johnson wanted to play the blues. But he really sucked. He sucked so much, that everyone who heard him told him to get lost. So he did. He disappeared for a little while, and when he came back, he was different. His music was startling--and musicians who'd laughed at him before now wanted to know how he did it. And according to the now-famous legend, Johnson had a simple answer: he went out to the crossroads just before midnight, and when the devil offered to tune his guitar in exchange for his soul, he took the deal.

Producer Pat Walters bravely escorts Jad to the scene of the supposed crime, in the middle of the night in the Mississippi Delta, to try to track down some shred of truth to all this. Not because they really thought something spooky would actually happen, but because deep down, there's a part of this story that--as much as the facts fall apart--still feels kind of true.

To help us get close to the real human behind the tall tales, we talk to Robert Johnson experts Tom Graves, Elijah Wald, David Evans, and Robert “Mack” McCormick. And we hear, posthumously, from Ledell Johnson...a man of no relation to Robert, who unintentionally helped the world fall for a blues-imbued ghost story.


Daniel Wolf Savin - Before There Were Stars (via Nautilus)

This article from Nautilus Magazine offers some insight into the two unlikely heroes of the formation of the stars and planets that made our universe the way it is - dark matter and molecular hydrogen.


Before There Were Stars

The unlikely heroes that made starlight possible.

By Daniel Wolf Savin | Illustration by Jon Han | February 27, 2014


THE UNIVERSE is the grandest merger story that there is. Complete with mysterious origins, forces of light and darkness, and chemistry complex enough to make the chemical conglomerate BASF blush, the trip from the first moments after the Big Bang to the formation of the first stars is a story of coming together at length scales spanning many orders of magnitude. To piece together this story, scientists have turned to the skies, but also to the laboratory to simulate some of the most extreme environments in the history of our universe. The resulting narrative is full of surprises. Not least among these is how nearly it didn’t happen—and wouldn’t have, without the roles played by some unlikely heroes. Two of the most important, at least when it comes to the formation of stars, which produced the heavier elements necessary for life to emerge, are a bit surprising: dark matter and molecular hydrogen. Details aside, here is their story.


Dark Matter


The Big Bang created matter through processes we still do not fully understand. Most of it—around 84 percent by mass—was a form of matter that does not interact with or emit light. Called dark matter, it appears to interact only gravitationally. The remaining 16 percent, dubbed baryonic or ordinary matter, makes up the everyday universe that we call home. Ordinary matter interacts not only gravitationally but also electromagnetically, by emitting and absorbing photons (sometimes called radiation by the cognoscenti and known as light in the vernacular).

As the universe expanded and cooled, some of the energy from the Big Bang converted into ordinary matter: electrons, neutrons, and protons (the latter are equivalent to ionized hydrogen atoms). Today, protons and neutrons comfortably rest together in the nuclei of atoms. But in the seconds after the Big Bang, any protons and neutrons that fused to form heavier atomic nuclei were rapidly blown apart by high-energy photons called gamma rays. The residual thermal radiation field of the Big Bang provided plenty of those. It was too hot to cook. But things got better a few seconds later, when the radiation temperature dropped to about a trillion degrees Kelvin—still quite a bit hotter than the 300 Kelvin room temperature to which we are accustomed, but a world of difference for matter in the early universe.

Heavier nuclei could now survive the gamma-ray bombardment. Primordial nucleosynthesis kicked in, enabling nuclear forces to bind protons and neutrons together, until the expansion of the universe made it too cold for these fusion reactions to continue. In these 20 minutes, the universe was populated with atomic nuclei. The resulting elemental composition of the universe weighed in at roughly 76 percent hydrogen, 24 percent helium, and trace amounts of lithium — all ionized, since it was too hot for electrons to stably orbit these nuclei. And that was it, until the first stars formed and began to forge all the other elements of the periodic table.

Before these stars could form, however, newly-formed hydrogen and helium atoms needed to gather together to make dense clouds. These clouds would have been produced when slightly denser regions of the universe gravitationally attracted matter from their surroundings. The question is, was the early universe clumpy enough for this to have happened?

To answer the question, we can look to the modern-day night sky. In it, we see a faint glow of microwave radiation that has an even fainter pattern in it. This so-called cosmic microwave background structure dates back to 377,000 years after the Big Bang, a mere fraction of the universe’s current age of 13.8 billion years, and analogous to less than a day in the 81-year life expectancy for a woman living today in the United States.
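As a quick check of that analogy, here is a back-of-the-envelope sketch using only the numbers given in the passage above:

    # Rough check of the article's analogy: 377,000 years out of 13.8 billion,
    # scaled to the 81-year life expectancy mentioned above.
    recombination_age_yr = 377_000      # age of universe at the CMB "snapshot"
    universe_age_yr = 13.8e9            # current age of the universe
    lifespan_yr = 81                    # life expectancy used in the article

    fraction = recombination_age_yr / universe_age_yr
    equivalent_days = fraction * lifespan_yr * 365.25

    print(f"fraction of cosmic history: {fraction:.2e}")
    print(f"equivalent point in an 81-year life: {equivalent_days:.1f} days")
    # about 0.8 days, i.e. "less than a day", as the article says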

At that time, the universe had just cooled to about 3,000 Kelvin. Free electrons started to be captured into orbit around protons, forming neutral hydrogen atoms. Photons from the flash of the Big Bang, whose progress had been impeded by their scattering off of unbound electrons, could now finally stream throughout the cosmos, essentially free. These photons continue to permeate the universe today, at a frigid temperature of only 2.7 Kelvin, and constitute the cosmic microwave background that we have measured using an array of ground-based, balloon-borne, and satellite telescopes.

These sky maps suggested something surprising: The intensity of the residual heat from the Big Bang made the early universe too smooth for gas clouds to form.

Enter dark matter. Because it does not interact directly with light, it was unaffected by the same radiation that smoothed out ordinary matter. Therefore it was left with a relatively high degree of clumpiness. It, rather than regular matter, initiated the formation of the stars and galaxies that make up the modern structure of the universe. Regions of space with an above-average density of dark matter gravitationally attracted matter from regions with lower densities. Halos of dark matter formed and merged with other halos, bringing ordinary matter along for the ride.


Molecular Hydrogen


Once the universe went neutral, gas began to form into clouds. As ordinary matter accelerated into the gravitational wells of dark matter, gravitational potential energy converted into kinetic energy, creating a hot gas of fast-moving particles with high kinetic energies embedded within halos of dark matter. Starting from temperatures around 1,000 Kelvin, these gas clouds eventually gave birth to the first stars when the universe was roughly half a billion years old (about four years into the lifespan of the typical U.S. woman).

For a star to form, a gas cloud needs to reach a certain density; but if its constituent molecules are too hot, zipping around in every direction, this density may be unreachable. The first step toward making star-forming clouds was for gas atoms to slow down by radiating their kinetic energy out of the cloud and into the larger universe, which by this time had cooled to below 100 Kelvin.

But the atoms could not cool themselves: when they collide like billiard balls, they exchange kinetic energy, yet the total kinetic energy of the gas remains unchanged. They needed a catalyst to cool off.

This catalyst was molecular hydrogen (two hydrogen atoms bound together by sharing their electrons). Hot particles colliding with this dumbbell-shaped molecule transferred some of their own energy to the molecule, causing it to rotate. Eventually these excited hydrogen molecules would relax back to their lowest-energy (or ground) state by emitting a photon that escaped from the cloud, carrying the energy out into the universe.

To make molecular hydrogen, the atomic gas clouds needed to do some chemistry. It might be surprising to hear that any chemistry was going on at all, given that the entire universe had just three elements. The most sophisticated chemical models of early gas clouds, however, include nearly 500 possible reactions. Fortunately, to understand molecular hydrogen formation, we need concern ourselves with only two key processes.

Chemists have named the first reaction associative detachment, a name fit for a psychiatric condition out of the DSM-V for which a clinician might prescribe some primordial lithium. Initially, most of the hydrogen in a gas cloud was in neutral atomic form, with the positive charge of a single proton cancelled out by the negative charge of a single orbiting electron. However, a small fraction of its atoms captured two electrons, creating a negatively charged hydrogen ion. These neutral hydrogen atoms and charged hydrogen ions “associated” with each other, causing the extra electron to detach and leaving behind neutral molecular hydrogen. In chemical notation, this can be represented as H + H- → H2 + e-. Associative detachment converted only about 0.01 percent of atomic hydrogen to molecules, but that small fraction allowed the clouds to begin to cool and become denser.

When the cloud had become sufficiently cool and dense, a second chemical reaction began. It is called three-body association, and written as H + H + H → H2 + H. This ménage-à-trois begins with three separate hydrogen atoms, and ends with two of them coupled and the third one left out in the cold. Three-body association converted essentially all of the cloud’s remaining atomic hydrogen into molecular hydrogen. Once all of the hydrogen was fully molecular, the cloud cooled to the point where its gas could condense enough to form a star.


Stars


From the formation of a dense cloud to the ignition of fusion at the heart of a star is a process whose complexity far exceeds what came before it. In fact, even the most sophisticated computer simulations available have yet to reach the point where the object becomes stellar in size, and fusion begins. Simulating most of the 200-million-year process is relatively easy, requiring only about 12 hours using high-speed, parallel-processing computer power. The problem lies in the final 10,000 years. As the density of the gas goes up, the structure of the cloud changes more and more rapidly. So, whereas for early times one needs only to calculate how the cloud changes every 100,000 years or so, for the final 10,000 years one must calculate the change every few days. This dramatic increase in the required number of calculations translates into more than a year of non-stop computer time on today’s fastest machines. Running simulations for the full range of possible starting conditions in these primordial clouds exceeds what can be achieved in a human lifetime. As a result, we still do not know the mass distribution for the first generation of stars. Since the mass of a star determines what elements it forges in its core, this hinders our ability to follow the pathway by which the universe began to synthesize the elements needed for life. Those of us who cannot wait to know the answer are counting on yet another hero: Moore’s Law.
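To see why those final 10,000 years dominate the computational cost, here is a rough step-count comparison based on the figures in the paragraph above (the three-day step size stands in for "every few days" and is an illustrative assumption):

    # Toy comparison of time-step counts for the two regimes described above.
    early_span_yr = 200e6 - 10_000      # first ~200 million years
    early_step_yr = 100_000             # "every 100,000 years or so"
    late_span_yr = 10_000               # the final 10,000 years
    late_step_yr = 3 / 365.25           # "every few days" (assume ~3 days)

    early_steps = early_span_yr / early_step_yr
    late_steps = late_span_yr / late_step_yr

    print(f"early phase: ~{early_steps:,.0f} steps")
    print(f"final phase: ~{late_steps:,.0f} steps")
    # roughly 2,000 steps versus over a million, which is why the endgame
    # eats up the computing time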


~ Daniel Wolf Savin is a contrabass-playing astrophysicist at Columbia University

Kahneman's System 1 & System 2 Thinking - A Primer

From Big Think, this is a brief primer on Daniel Kahneman's System 1 and System 2 thinking, as outlined in his excellent book, Thinking, Fast and Slow.

The author of this piece feels it necessary to point out a couple of limitations of the System 1/System 2 model, namely that it is "light on evolution." However, I think this is a misdirection.

I'm sure Kahneman would be the first to proclaim that System 1 developed far earlier in our evolution than did System 2. Even today, much of our daily survival is taken care of by System 1, while System 2 is engaged probably around 10% of the time (at best) for most people.

Kahneman's Mind-Clarifying Strangers: System 1 & System 2

by Jag Bhalla
March 7, 2014

Feeling is a form of thinking. Both are ways we process information, but feeling is faster. That’s the crux of Daniel Kahneman’s mind-clarifying work. It won a psychologist an economics Nobel. And strange labels helped.

In Thinking, Fast and Slow, Kahneman wrestles with flawed ideas about decision making. “Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational…Second, emotions…explain most of the occasions on which people depart from rationality.” But research has “traced [systematic] errors to the… machinery of cognition…rather than corruption…by emotion.”

Kahneman sidesteps centuries of confusion (and Freudian fictions) by using new—hence undisputed—terms: the brilliantly bland “System 1” and “System 2.” These strangers help by forcing you to ask about their attributes. System 1 “is the brain’s fast, automatic, intuitive approach,” System 2 “the mind’s slower, analytical mode, where reason dominates.” Kahneman says “System 1 is...more influential…guiding…[and]...steering System 2 to a very large extent.”

The measurable features of System 1 and System 2 cut across prior categories. Intuitive information-processing has typically been considered irrational, but System 1’s fast thinking is often logical and useful (“intuition is nothing more and nothing less than recognition”). Conversely, despite being conscious and deliberate System 2 can produce poor (sometimes irrational) results.

Kahneman launched behavioral economics by studying these systematic “cognitive biases.” He was astonished that economists modeled people as “rational, selfish, with tastes that don’t change,” when to psychologists “it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.”

Kahneman’s potentially paradigm-tipping work has limitations. It is light on evolution; e.g., focusing on numerically framed decisions discounts the fact that we didn't evolve to think numerically. Math is a second-nature skill, requiring much System 2 training (before becoming a System 1 skill). Also, we evolved to often act without System 2 consciously deciding (habits are triggered by System 1). Indeed, cognitive biases might be bad System 1 habits rather than built-in brain bugs. And cognitive biases have two sources of error: the observed behavior and what economists suppose is “rational.”

Those limitations aside, whenever pondering cognition, bear in mind the distinct traits of System 1 and System 2. Mapping mental skills (and the mini-skills they consist of) onto those labels can clarify your thinking about thinking.


~ Illustration by Julia Suits, The New Yorker Cartoonist & author of The Extraordinary Catalog of Peculiar Inventions.

Sunday, March 09, 2014

Alejandra Sel - Predictive Codes of Interoception, Emotion, and the Self - A Commentary on Anil K. Seth


This is an interesting discussion between Anil K. Seth and Alejandra Sel on Seth's recent paper in Trends in Cognitive Sciences (pdf). Here is a brief summary of Seth's position, as understood by Sel:
Seth's proposal that sensory processing involves predictions is nothing new. What is new in Seth's model is that perception of internal body signals (interoception), paralleling the perception of external signals, relies on top-down predictions of the causes of the sensory input, rather than being a passive, bottom-up process.
While Sel agrees in principle with Seth's position, there remain four points [assumptions] that Sel feels the need to address before launching any studies to validate Seth's model.
1. [E]motions are defined as affective states relying on interactions between top-down interoceptive predictions and bottom-up interoceptive prediction errors.

2. Seth's model refers to the anterior insular cortex [AIC] as the key structure that generates, compares, and updates interoceptive predictions. Empirical evidence has shown that AIC houses a secondary associative area where interoceptive, exteroceptive, and motivational signals converge (Seth and Critchley, 2013).

3. [A]lthough a free-energy model of self has been proposed (Apps and Tsakiris, in press), as yet there is no evidence to suggest that self-processing follows the principles of predictive coding [PC], [as implied by Seth's model].

4. An individual's attention to the body can be significantly enhanced by the practice of Mindfulness (Farb et al., 2013), which also has the effect of enhancing both cortical responses of interoceptive attention and self-reported interoceptive awareness (Mehling et al., 2013). Within Seth's model this might increase the accuracy of interoceptive inference, emotions, and self-awareness.
The original article is only 9 pages including references, and the commentary below is brief. It's cool to see ideas proposed and addressed in an open forum.
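For readers new to predictive coding, here is a minimal toy sketch (my own illustration, not code from Seth's paper) of the loop the model describes: a top-down prediction about an interoceptive signal (say, heart rate) is compared with the incoming signal, and the resulting prediction error is reduced either by updating the prediction (perceptual inference) or by acting on the body so the signal moves toward the prediction (a crude stand-in for active inference).

    # Minimal, illustrative predictive-coding loop (not from Seth's paper).
    # One scalar "interoceptive" signal, e.g. heart rate in beats per minute.
    def pc_step(prediction, observation, learning_rate=0.1,
                active=False, action_gain=0.1):
        error = observation - prediction        # bottom-up prediction error
        if active:
            # toy "active inference": act on the body so the observed signal
            # moves toward the prediction
            observation -= action_gain * error
        else:
            # perceptual inference: update the top-down prediction instead
            prediction += learning_rate * error
        return prediction, observation, error

    prediction, observation = 60.0, 75.0        # predicted vs. actual heart rate
    for _ in range(50):
        prediction, observation, error = pc_step(prediction, observation)
    print(round(prediction, 1), round(abs(error), 2))  # prediction converges on 75

In this toy picture the quantity being driven down is just the prediction error, which is what free energy reduces to under the simplifying assumptions noted in the glossary below; the real model adds precision weighting and a hierarchy of such loops.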

Full Citation: 
Sel A. (2014, Mar 4). Predictive codes of interoception, emotion, and the self. Frontiers in Psychology: Cognitive Science; 5:189. doi: 10.3389/fpsyg.2014.00189

Glossary (from Seth)

  • Active inference: an extension of PC (and part of the free energy principle), which says that agents can suppress prediction errors by performing actions to bring about sensory states in line with predictions. 
  • Augmented reality: a technique in which virtual images can be combined with real-world real-time visual input to create hybrid perceptual scenes that are usually presented to a subject via a head-mounted display. 
  • Appraisal theories of emotion: a long-standing tradition, dating back to James (but not Lange), according to which emotions depend on cognitive interpretations of physiological changes. 
  • Emotion: an affective state with psychological, experiential, behavioral, and visceral components. Emotional awareness refers to conscious awareness of an emotional state. 
  • Experience of body ownership (EBO): the experience of certain parts of the world as belonging to one’s body. EBO can be distinguished into that related to body parts (e.g., a hand) and a global sense of identification with a whole body. 
  • Free energy principle: a generalization of PC according to which organisms minimize an upper bound on the entropy of sensory signals (the free energy). Under specific assumptions, free energy translates to prediction error. 
  • Generative model: a probabilistic model that links (hidden) causes and data, usually specified in terms of likelihoods (of observing some data given their causes) and priors (on these causes). Generative models can be used to generate inputs in the absence of external stimulation. 
  • Interoception: the sense of the internal physiological condition of the body. 
  • Interoceptive sensitivity: a characterological trait that reflects individual sensitivity to interoceptive signals, usually operationalized via heartbeat detection tasks. 
  • Predictive coding (PC): a data processing strategy whereby signals are represented by generative models. PC is typically implemented by functional architectures in which top-down signals convey predictions and bottom-up signals convey prediction errors. 
  • Rubber hand illusion (RHI): a classic experiment in which the experience of body ownership is manipulated via perceptual correlations such that a fake (i.e., rubber) hand is experienced as part of a subject’s body. 
  • Selfhood: the experience of being a distinct, holistic entity, capable of global self-control and attention, possessing a body and a location in space and time [64]. Selfhood operates on multiple levels – from basic physiological representations to metacognitive and narrative aspects.
  • Subjective feeling states: consciously experienced emotional states that underlie emotional awareness. 
  • Von Economo neurons (VENs): long-range projection neurons found selectively in hominid primates and certain other species. VENs are found preferentially in the AIC and ACC. 
* * * * *

    Predictive codes of interoception, emotion, and the self

    Alejandra Sel
    • Department of Psychology, Royal Holloway University of London, Egham, Surrey, UK
    A commentary on:
    Interoceptive inference, emotion, and the embodied self, by Seth, A. K. (2013). Trends Cogn. Sci. 17, 565–573. doi: 10.1016/j.tics.2013.09.007

    Interoception is the ability to perceive and integrate physiological signals from within the body. It is closely related to the autonomic system and is a key component in the generation of affective states and abstract representations of the self (Critchley et al., 2004; Ainley and Tsakiris, 2013). Seth proposes a predictive coding (PC) model of interoception that involves a free-energy based explanation of emotion awareness and selfhood. In this model, emotions, and in turn the sense of self, rely on predictions of the causes of interoceptive signals. Within this framework, the interoceptive system minimizes free-energy, or the discrepancy between predictions and interoceptive signals. Free-energy can be minimized either by updating predictions about the causes of the sensory signals (perceptual updating), or by acting to change autonomic states such that bodily states are more predictable (active inference).

    The free-energy principle is currently in vogue in neuroscience. We are no longer strangers to the idea that perception is an active iterative process between abstract representations (predictions) and sensory feedback (prediction errors) (Clark, 2013). The basic idea of PC in the cognitive sciences began with the notion of neural energy (Helmholtz, 1860), and it has been present ever since in the form of theoretical proposals and empirical findings, especially in the visual domain (Lee and Mumford, 2003). Therefore, Seth's proposal that sensory processing involves predictions is nothing new. What is new in Seth's model is that perception of internal body signals (interoception), paralleling the perception of external signals, relies on top-down predictions of the causes of the sensory input, rather than being a passive, bottom-up process.

    Is then Seth's interoceptive inference model an interesting proposal to explain emotion awareness and selfhood? My opinion is yes and that it is worth investigating. However, there are some aspects to consider before designing studies to empirically test Seth's model.

    Seth's model builds on three main assumptions. First, emotions are defined as affective states relying on interactions between top-down interoceptive predictions and bottom-up interoceptive prediction errors. Following the principles of PC, there is a constant attempt to minimize the discrepancy between the predicted and the actual sensory events, either through updating perceptual expectations or through active inference (Friston et al., 2010). As Seth nicely explains, active inference in interoception occurs when predictions are transcribed into reference points that trigger autonomic homeostatic regulation, occurring when the weight of the error is low and attention to errors is attenuated (Gu et al., 2013).

    Fortunately, advances in biomedical tools allow us to experimentally monitor the body's physiological signals, although some methodological challenges still remain when investigating interoception. This general issue may also impact PC studies of interoception. However, applying PC to interoception, as proposed in Seth's model, may allow us to overcome these challenges. The main argument of PC is that all sensory systems are linked by working under identical code schemes (Friston and Kiebel, 2009). Therefore, Seth's PC model allows us to apply knowledge from visual and other domains to investigate brain and behavioral mechanisms of interoception. Neuroimaging studies have demonstrated direct evidence of PC in visual brain areas (Egner et al., 2010; Wyart et al., 2012). Likewise, Seth's anatomical predictions (i.e., the anterior insular cortex, AIC) can be tested by using multivoxel pattern analysis approaches, in combination with orthogonal experimental designs where the stimulus presentation probability is held constant in all conditions (Egner et al., 2010).

    The second assumption in Seth's model refers to the AIC as the key structure that generates, compares, and updates interoceptive predictions. Empirical evidence has shown that AIC houses a secondary associative area where interoceptive, exteroceptive, and motivational signals converge (Seth and Critchley, 2013). An important principle of PC explains that the surprisal generated in one unimodal system can be explained away by inferences in another system via high-order neural areas (Apps and Tsakiris, in press). Considering the multimodal nature of the AIC, one could suggest that the errors in the interoceptive signal can be explained by exteroceptive inferences (or vice versa) and that the interoceptive generative models are only a part of the way the system explains errors. Whether the AIC exclusively codes the surprisal evoked by interoceptive signals or, alternatively, is involved in top-down general predictions directed to a more specialized interoceptive circuit remains an open question.

    The third crucial aspect of Seth's model is the concept of selfhood. Seth has employed the idea that selfhood is formed by the integration of predictive interoceptive and exteroceptive signals (Tajadura-Jimenez and Tsakiris, in press). Individual differences in the accuracy of interoceptive awareness influence integration of interoceptive and exteroceptive information, as shown by studies in body illusions (Tsakiris et al., 2011). Individuals with low accuracy show more susceptibility to body illusions, which Seth interprets as lower precision-weighting of interoceptive prediction errors. However, although a free-energy model of self has been proposed (Apps and Tsakiris, in press), as yet there is no evidence to suggest that self-processing follows the principles of PC.

    Another crucial factor that may influence interoceptive awareness, and therefore self-awareness, is attention. In PC, attention is considered to be a mechanism that optimizes the precision of prediction errors during hierarchical inference (Feldman and Friston, 2010). For example, studies in vision have demonstrated that attention enhances the neural specificity for expected vs. unexpected stimuli in visual cortex (Jiang et al., 2013). Similarly, directing attention toward internal body signals might increase the precision of interoceptive prediction errors and therefore improve interoceptive awareness. An individual's attention to the body can be significantly enhanced by the practice of Mindfulness (Farb et al., 2013), which also has the effect of enhancing both cortical responses of interoceptive attention and self-reported interoceptive awareness (Mehling et al., 2013). Within Seth's model this might increase the accuracy of interoceptive inference, emotions, and self-awareness.

    Therefore, I agree with Seth's proposal that the brain is a prediction machine that integrates interoceptive and exteroceptive information in a Bayesian way. However, future research is needed to elucidate the internal properties of the interoceptive inference.

    Acknowledgments

    This work was supported by the European Research Council Starting Investigator Grant (ERC-2010-StG-262853). I would like to thank Manos Tsakiris and the reviewer for their insightful comments and Lara Maister and Vivien Ainley for their help with manuscript editing.


    References


    Ainley, V., and Tsakiris, M. (2013). Body conscious? Interoceptive awareness, measured by heartbeat perception, is negatively correlated with self-objectification. PLoS ONE 8:e55568. doi: 10.1371/journal.pone.0055568

    Apps, M., and Tsakiris, M. (in press). The free-energy self: a predictive coding account of self-recognition. Neurosci. Biobehav. Rev. doi: 10.1016/j.neubiorev.2013.01.029

    Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204. doi: 10.1017/S0140525X12000477

    Critchley, H. D., Wiens, S., Rotshtein, P., Ohman, A., and Dolan, R. J. (2004). Neural systems supporting interoceptive awareness. Nat. Neurosci. 7, 189–195. doi: 10.1038/nn1176

    Egner, T., Monti, J. M., and Summerfield, C. (2010). Expectation and surprise determine neural population responses in the ventral visual stream. J. Neurosci. 30, 16601–16608. doi: 10.1523/jneurosci.2770-10.2010

    Farb, N. A. S., Segal, Z. V., and Anderson, A. K. (2013). Mindfulness meditation training alters cortical representations of interoceptive attention. Soc. Cogn. Affect. Neurosci. 8, 15–26. doi: 10.1093/scan/nss066

    Feldman, H., and Friston, K. J. (2010). Attention, uncertainty, and free-energy. Front. Hum. Neurosci. 4:215. doi: 10.3389/fnhum.2010.00215

    Friston, K., and Kiebel, S. (2009). Predictive coding under the free-energy principle. Philos. Trans. R. Soc. B Biol. Sci. 364, 1211–1221. doi: 10.1098/rstb.2008.0300

    Friston, K. J., Daunizeau, J., Kilner, J., and Kiebel, S. J. (2010). Action and behavior: a free-energy formulation. Biol. Cybern. 102, 227–260. doi: 10.1007/s00422-010-0364-z

    Gu, X., Hof, P. R., Friston, K. J., and Fan, J. (2013). Anterior insular cortex and emotional awareness. J. Comp. Neurol. 521, 3371–3388. doi: 10.1002/cne.23368

    Helmholtz, L. F. V. (1860). Handbuch der Physiologischen Optik [Handbook of Physiological Optics]. Leipzig: Voss.

    Jiang, J., Summerfield, C., and Egner, T. (2013). Attention sharpens the distinction between expected and unexpected percepts in the visual brain. J. Neurosci. 33, 18438–18447. doi: 10.1523/jneurosci.3308-13.2013

    Lee, T. S., and Mumford, D. (2003). Hierarchical Bayesian inference in the visual cortex. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 20, 1434–1448. doi: 10.1364/JOSAA.20.001434

    Mehling, W. E., Daubenmier, J., Price, C. J., Acree, M., Bartmess, E., and Stewart, A. L. (2013). Self-reported interoceptive awareness in primary care patients with past or current low back pain. J. Pain Res. 6, 403–418. doi: 10.2147/JPR.S42418

    Seth, A. K., and Critchley, H. D. (2013). Extending predictive processing to the body: emotion as interoceptive inference. Behav. Brain Sci. 36, 227–228. doi: 10.1017/S0140525X12002270

    Tajadura-Jimenez, A., and Tsakiris, M. (in press). Balancing the “Inner” and the “Outer” self: interoceptive sensitivity modulates self-other boundaries. J. Exp. Psychol. Gen. doi: 10.1037/a0033171

    Tsakiris, M., Tajadura-Jiménez, A., and Costantini, M. (2011). Just a heartbeat away from one's body: interoceptive sensitivity predicts malleability of body-representations. Proc. Biol. Sci. 278, 2470–2476. doi: 10.1098/rspb.2010.2547

    Wyart, V., Nobre, A. C., and Summerfield, C. (2012). Dissociable prior influences of signal probability and relevance on visual contrast sensitivity. Proc. Natl. Acad. Sci. U.S.A. 109, 3593–3598. doi: 10.1073/pnas.1120118109

    In Conversation with… Steven Pinker (via Mosaic)

    Mosaic is a new open-access online science magazine produced by the Wellcome Trust. In the first collection of stories they have posted, one is an interview with the always interesting (and sometimes infuriating) Steven Pinker of Harvard University, author of How the Mind Works (1997) and The Blank Slate: The modern denial of human nature (2002) [both nominated for the Pulitzer Prize], and The Better Angels of Our Nature: Why Violence Has Declined (2011).

    Those who follow Pinker's writing will find little new here, but for those who do not this article/interview provides an excellent overview of the man and his career.

    In Conversation with… Steven Pinker

    Oliver Burkeman explores human nature, violence, feminism and religion with one of the world’s most controversial cognitive scientists. Can he dent Steven Pinker’s optimism?

    March 4, 2014

    Steven Pinker holding a piece of the Berlin Wall

    In the week that I interview the cognitive psychologist and bestselling author Steven Pinker in his office at Harvard, police release the agonising recordings of emergency calls made during the Sandy Hook school shootings. In Yemen, a suicide attack on the defence ministry kills more than 50 people. An American teacher is shot dead as he goes jogging in Libya. Several people are killed in riots between political factions in Thailand, and peacekeepers have to be dispatched to the Central African Republic.

    In short, it’s not hard to find anecdotes that seem to contradict a guiding principle behind much of Pinker’s work – which is that science and human reason are, slowly but unmistakably, making the world a better place.

    Repeatedly during our conversation, I seek to puncture the silver-haired professor’s quietly relentless optimism. If the ongoing tolls of war and violence can’t do it, what about the prevalence in America of unscientific beliefs about the origins of life? Or the devastating potential impacts of climate change, paired with the news – also released in the week we meet – that 23 per cent of Americans don’t believe it’s happening, up seven percentage points in just eight months?

    I try. But it proves far from easy.

    At first glance Pinker’s implacable optimism, though in keeping with his sunny demeanour and stereotypically Canadian friendliness, presents a puzzle. His stellar career – which includes two Pulitzer Prize nominations for his books How the Mind Works (1997) and The Blank Slate: The modern denial of human nature (2002) – has been defined, above all, by support for the fraught notion of human nature: the contention that genetic predispositions account in hugely significant ways for how we think, feel and act, why we behave towards others as we do, and why we excel in certain areas rather than others.

    This has frequently drawn Pinker into controversy – as in 2005, when he offered a defence of Larry Summers, then Harvard’s President, who had suggested that the under-representation of women in science and maths careers might be down to innate sex differences.

    “The possibility that men and women might differ for reasons other than socialisation, expectations, hidden biases and barriers is very close to an absolute taboo,” Pinker tells me. He faults books such as Lean In, by Facebook’s chief operating officer, Sheryl Sandberg, for not entertaining the notion that men and women might not have “identical life desires”. But he also insists that taking the possibility of such differences seriously need not lend any justification to policies or prejudices that exclude women from positions of expertise or power.

    “Even if there are sex differences, they’re differences in the means of two overlapping populations, so for any [stereotypically female] trait you care to name, there’ll be many men who are more extreme than most women, and vice versa. So as a matter of both efficiency and of fairness, you should treat every individual as an individual, and not prejudge them.”

    It is generally assumed that anyone who takes human nature seriously will be a fatalist, and probably politically conservative. If we’re pre-wired to be how we are, the reasoning goes, we might as well accept it and give up on hopes of any change. One way of interpreting Pinker’s most recent book, The Better Angels of Our Nature, is as an 800-page doorstopper of a riposte to this idea. Not only can we change, but when it comes to arguably the most important measure of improvement – the violence we inflict on each other – we actually have changed, to an almost incredible degree.


    “I had very often come across the objection that if human nature exists – including some ugly motives like revenge, dominance, greed and lust – then that would imply it’s pointless to try to improve the human condition, because humans are innately depraved,” says the 59-year-old, whose distinctive appearance – today he is sporting black cowboy boots – frequently gets him stopped in the street. “Or there’s an alternative objection: that we ought to improve our lot, and therefore, it cannot be the case that human nature exists.”

    Pinker puts all this down to “a fear that acknowledging human nature would subvert any attempt to improve the human condition”. Better Angels argues that this is a misunderstanding of what human nature means. It shouldn’t be identified with a certain set of behaviours; rather, we have a complex variety of predispositions, violent and peaceful, that can be activated in different ways by different environments. The book’s title, drawn from Abraham Lincoln’s first inaugural address, is “a poetic allusion to the parts of human nature that can overcome the nastier parts,” he explains.

    But Better Angels is notable above all for the sheer weight of evidence it amasses, culled from forensic archaeology, government statistics, town records, and studies by ‘atrocitologists’ of historical genocides and other mass killings. The book demonstrates that homicides, calculated as a proportion of the world’s population at any given point, have plummeted; when you look at the numbers this way, World War II wasn’t the worst single atrocity in history, but more like the tenth.

    Pinker dwells, in sometimes unnerving detail, on horrifying methods of torture once considered routine. “The Heretic’s Fork had a pair of sharp spikes at each end,” he writes, in what is definitely not the most appalling passage. “One end was propped under the victim’s jaw and the other at the base of his neck, so that as his muscles became exhausted he would impale himself in both places.”

    “Human nature or no human nature,” Pinker says, “it’s just a brute fact that we don’t throw virgins into volcanoes any more. We don’t execute people for shoplifting a cabbage. And we used to.”

    He offers a multi-pronged explanation for this decline, from the rise of the state and of cities, to literacy, trade and democracy. Whether this constitutes an across-the-board endorsement of scientific rationality may be debated. (“Like other latter-day partisans of ‘Enlightenment values’,” the critic John Gray wrote, “Pinker prefers to ignore the fact that many Enlightenment thinkers have been doctrinally anti-liberal, while quite a few have favoured the large-scale use of political violence”.) But it’s hard to question the basic finding that your chances of meeting a sticky end, all else being equal, are vastly lower in 2014 than they were in 1014.

    If Pinker’s message has proved hard for some to swallow, that may be because our standards are improving even faster than our actual behaviour, giving the misleading impression that things are getting worse. “Hate attacks on Muslims are deplorable, and they ought to be combated, and it reflects well that we’re concerned when they do occur,” Pinker says. “But by the standards of past pogroms and ethnic cleansings, they’re in the noise: this is not a phenomenon of the same magnitude as the ethnic expulsions of decades past.”

    We’ve even witnessed the emergence of whole new categories of condemnable acts. Take bullying, says Pinker: “The President of the United States gave a speech denouncing bullying! When I was a child, this would have been worthy of satire.” As we continue to construct a social environment that activates more and more of our peaceable dispositions, and fewer and fewer of our aggressive ones, the remaining instances of bad behaviour stick out like ever-sorer thumbs.

    What’s more, evolutionary psychology, one of Pinker’s several specialisms, can explain why. For reasons that long ago made excellent sense, our brains are adapted to focus on bad news over good, vivid threats over vague ones, and recent horrors over historically distant atrocities. Our elevated levels of anxiety about the future might actually be a sign of reason’s triumph.

    “It could be interpreted as a sign of our growing up,” Pinker says. “We worry about more things, because we know that there are more things to worry about. Every time we go to a restaurant, we worry we might be ingesting saturated fats, or carcinogens. For my parents’ generation, the main concern about food was: ‘Does it taste delicious?’”

    §

    Many of Pinker’s most ambitious ideas about science and human morality have their origins in a seemingly trivial observation about irregular verbs. Building on the groundbreaking linguistic ideas of Noam Chomsky, Pinker proposed that certain simple language errors committed by young children “capture the essence of language” itself.

    When a three-year-old says “I eated the ice cream” or “we holded the kittens”, she is, Pinker observes, following a grammar rule correctly, and making a mistake only because we happen to suspend the rule for those verbs in English. Since she couldn’t have learned “eated” or “holded” by simply imitating adult speakers, this points to the presence of innate cognitive machinery – a “language instinct”, to quote the title of Pinker’s 1994 book – that enables a young child to construct novel linguistic forms by following rules.
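    The rule the child is over-applying is easy to capture in a toy sketch (my own illustration, not Pinker's words-and-rules model): apply the regular "add -ed" rule to every verb, and novel forms like "eated" fall out automatically.

        # Toy version of the regular past-tense rule a child might over-apply.
        # Real English morphology (and Pinker's account) is far richer; this only
        # shows how following a rule yields forms the child has never heard.
        def regular_past(verb):
            if verb.endswith("e"):
                return verb + "d"   # bake -> baked
            return verb + "ed"      # walk -> walked

        for verb in ["walk", "eat", "hold"]:
            print(verb, "->", regular_past(verb))
        # walk -> walked, eat -> eated, hold -> holded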

    (Irregular verbs have an even more intimate role in Pinker’s life: he met his wife, the philosopher Rebecca Goldstein, through an email exchange after he mentioned her correct use of the past participle ‘stridden’ in his book Words and Rules.)

    Years later, in 2007’s The Stuff of Thought, he extended this reasoning to the structures of “mentalese”, the wordless “language of thought” that he argues we use to think in. When, for example, we use spatial language to talk about time – as in “a long day”, or bringing a meeting “forward” – might we be relying on an in-built, pre-linguistic tendency to think about the abstract notion of time by analogy to space, something far more concretely graspable to an early human concerned with food, shelter and survival?

    This view of the mind – as a set of modules evolved to confront specific cognitive challenges on the Pleistocene savannah – is most ambitiously on display in How the Mind Works, a dazzling effort to “reverse engineer” all of our mental capacities, asking for what purposes each might have been selected. Love, humour, war, jealousy, the disgust we feel at the idea of eating certain animals but not others, religious food taboos, compulsive lying: none of them escape the blade of Pinker’s rationalist scalpel.

    Assuming you buy the book’s general approach, it is almost impossible after reading it to cling to the romantic notion that there might be more to our inner lives than the brute facts of biology and natural selection. The notable exception is how the brain causes sentience, or conscious awareness: after a long discussion on the topic, Pinker finally concludes: “Beats me!” There’s reason to believe, he argues, that humans may simply lack the mental capacity ever to solve the mind–body problem.

    But the broader philosophical question – how far science can, or should, reach into the life of the mind – got a disputatious airing last year, when Pinker wrote an essay for the New Republic entitled ‘Science is not your enemy’. It was motivated in part by reports on both sides of the Atlantic about declining student enrolments in humanities subjects, and was Pinker’s intervention in the long-running debate over ‘scientism’: are science and scientists guilty of attempting to colonise areas of intellectual life where they don’t belong?

    Rather than denying that this was a real phenomenon, as numerous scientists have, Pinker audaciously claimed it was a good thing – providing you defined ‘scientism’ correctly. Humanities scholars had themselves to blame, he implied, for the decline of their fields: by insisting on remaining inside departmental silos, resistant to new approaches, they’d helped guarantee their growing irrelevance. Science was not engaged upon “an imperialistic drive to occupy the humanities,” he wrote. “The promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them.”

    In a furious response, entitled ‘Crimes against humanities’, the New Republic’s literary editor, Leon Wieseltier, accused Pinker of denying the very possibility of valid yet non-scientific knowledge. How absurd, he argued, to imagine that a scientific analysis of a painting – a chemical breakdown of its pigments and textures, and so on – could be the only thing worth saying about it. Pinker calls this a “paranoid” interpretation of his argument. “How could an understanding of perception of colour, of form, of lighting, of shading, of content such as faces and landscapes not enrich our understanding of art?”

    Yet if Wieseltier’s retort was overheated, he may still have a point. Pinker wasn’t – and isn’t – merely calling on scholars from different disciplines to talk to each other more. His argument is that any scholar committed to the idea that “the world is intelligible” is doing science. “The great thinkers of the Age of Reason and the Enlightenment were scientists,” he wrote, naming various philosophers.

    It seems to follow from this that non-scientific scholarship doesn’t help make the world intelligible, but Pinker will have none of it. “I’m married to a humanities scholar. I collaborate with humanities scholars. I’m in fields like linguistics, where deans don’t know whether it’s the humanities or not,” he says. “Many humanities scholars – particularly here at Harvard and MIT, but elsewhere too – find it very exciting that there might be new ways of approaching old problems, and an influx of new ideas. I mean, who in their right mind could defend insularity as a principle for excellence in anything?”

    Of course there are ways of studying a painting, he concedes, that result in worthwhile insights that can’t be described as scientific. But, he tells me, “I think the humanities would do themselves a favour by not insisting on staying in a silo. If they are wanting to attract the smartest minds from the next generation, it would be wise to hold out the promise that there will be new ways of understanding things – the same expansive mindset that attracts smart, ambitious people to the sciences could also attract them to the humanities.

    “That it isn’t just a question of reinterpreting the same works of art, with the same methods, over and over again,” he concludes. “I don’t see why humanistic scholarship can’t make progress. Wieseltier seemed to insist that it can’t, but I don’t think most people in the humanities would agree with this. He claims to speak for the humanities. But I can imagine a lot of people in the humanities saying: ‘Speak for yourself!’”

    §

    Pinker was born in 1954 in Montréal, and raised in that bilingual city’s English-speaking Jewish community (his sister, Susan, is also a psychologist, of the clinical rather than research variety). It’s tempting to try to attribute his subsequent intellectual interests to the milieu of his upbringing. Did his focus on language emerge from having grown up in a linguistic battleground? Did his conception of the mind as a complex assemblage of modules, each designed for specific purposes, arise from inspecting the machines his grandfather used as a garment manufacturer? Was it growing up in the 1960s, when many progressives embraced a ‘blank slate’ model of humans as a precondition for radical change, that prompted him to rebel against that notion in his work?

    Such speculation can be hazardous when it concerns an evolutionary psychologist who believes that genetic heritage is more important than parenting or peer-group influence. But how far, really, does Pinker believe his career trajectory was influenced by his genes, and how far by his upbringing?

    “There are parallel universes to this one [in which] I wouldn’t have written The Better Angels of Our Nature or The Language Instinct,” he says. “But I’d probably be in the sciences of something human. I probably wouldn’t have been a physicist: I’m too much of a yenta, too interested in humans.” On the other hand, “I probably wouldn’t have been a literary critic.”

    In this universe, Pinker studied experimental psychology at McGill University in Montréal, then did his PhD in the same field at Harvard; he has spent the rest of his career there and at MIT, just down the street.

    Wherever he got them from, Pinker’s dispositions include a prodigious appetite for work. As well as being a self-confessed micromanager in his teaching work, as Harvard’s Johnstone Family Professor of Psychology, he’s usually either pursuing a full schedule of research, speaking and article-writing, or plunging into months-long marathons of book-writing.

    “When I write a book, it’s almost all-consuming,” he says, recalling the year he spent in his house on Cape Cod writing The Better Angels, seven days a week, and sometimes until three in the morning. (He’d spent the previous year doing little but reading in preparation for it.) “I do try to exercise. I try to spend some time being a human being with my wife” – as recreation, he and Goldstein ride a tandem bicycle and paddle a tandem kayak. “Fortunately, she’s also a very intense writer, so she sympathises.”

    The couple do not have children, a fact Pinker sometimes uses to illustrate the non-determinative nature of genetic predispositions. (He might be predisposed, thanks to natural selection, to reproduce, but he’s used his frontal lobe, a crucial part of his evolutionary inheritance, to decide not to.) “Some things have to give,” Pinker says. “I’m not on Facebook, I don’t see a whole lot of movies, I don’t watch much TV – not because I consider myself above TV, I just don’t have time. And I don’t have a whole lot of face-to-face meetings.” The Pinker–Goldstein house is sometimes almost silent, except for keyboard-tapping, for days and weeks on end.

    Both partners are self-described, out-and-proud atheists. Yet while Pinker has received awards from atheist organisations for his support for their cause, it’s notable that he opts not to focus on religion, or its opponents, in his work. A Pinker book on the topic would surely have sold impressively, anointing him the fifth horseman of New Atheism – but “there’s just not enough intellectual content in there, at least on mind, for me to explore,” he says. “I think [Richard] Dawkins has done a fine job; I don’t think I have anything to add to that.”

    Pinker’s relative lack of engagement in the modern wars over belief shouldn’t be taken as any endorsement of Stephen Jay Gould’s argument that religion and science are “non-overlapping magisteria”, each a legitimate domain of authority that should keep out of the other’s business. “As a matter of fact,” Pinker says, “religions have concerned themselves with the subject-matter of science… All the world’s major religions have origin myths, they have theories of psychology, of what animates a body that allows it to make decisions. And I think science has competed on that territory successfully: it has shown that those explanations are factually incorrect.”

    Nor, in his eyes, should religion have any franchise on morality: “That’s not to say that morality is going to be determined by biology – it could be – but rather that it’s the subject-matter of secular moral philosophy.”

    Does any kind of spirituality, however non-religiously defined, play a role in his life?

    “I’m afraid of using the word ‘spiritual’,” he says. “I mean, I have a sense of awe and wonder – a sense of intellectual vertigo in pondering certain questions. I hesitate to use the word ‘spiritual’ just because it comes with so much baggage about the supernatural.”

    Pinker’s next book, The Sense of Style, will be a style manual for writers incorporating insights from cognitive psychology and linguistics. For example, it will offer advice on how to get around “the curse of knowledge” – the difficulty writers face in being unable to place themselves in the mind of a reader who doesn’t already know as much as the writer knows. Or the question of how to relate to one’s imagined reader: insights from psychology, Pinker will argue, show that the appropriate metaphor to keep in mind is one of vision – that “the stance you take as a writer ought to be to pretend that you’re pointing out something in the world that your reader could see with his own eyes if only he were given an unobstructed view”.

    To the extent that these, or any other findings, rely on explanations from evolutionary psychology, they’re vulnerable to a recurrent criticism: aren’t evolutionary psychologists guilty of simply constructing retrospectively satisfying ‘just-so stories’, with no way of showing whether or not they’re the truth?

    In one memorable passage in How the Mind Works, Pinker suggests that our cultural tendency to reward successful executives (and Harvard academics) with high-floor offices might result from an adaptive preference for good views of the surrounding territory, the better to defend against attackers. But in an alternative world where we rewarded executives with offices in the basement, couldn’t you construct a mirror-image explanation, about the benefits of being able to hide out of sight?

    For Pinker, the crucial question is whether a hypothesis can be tested. First, he says, you’d have to establish – by means of psychology experiments, or surveys of property prices – that there was indeed a culturally widespread, present-day preference for high floors with good views. Then you’d have to scour the historical evidence: for example, data from studies of “tribal warfare, on whether there’s been a historically continued preference for high vantage points over bunkers and burrows”. Sufficient data showing a preference through history and across cultures, and in contexts of life and death, might amount to good reason to accept your hypothesis.

    Once more, Pinker navigates his way through my critique with ease. All attempts to puncture his unique brand of rational optimism – his confidence that careful scientific thinking, consistently applied, will carry humanity towards a future of reason, peace and flourishing – end in failure.

    Even climate change, that archetypal case of humanity remaining inert in the face of scientific knowledge, doesn’t do it. “I think it would be foolhardy to say we’ll solve it, but I don’t think it’s foolhardy to say we can solve it,” Pinker says. “History tells us there have been cases in which the global community has adopted agreements to better collective welfare: the ban on atmospheric nuclear testing would be an example. The ban on commercial whaling. The end of piracy and privateering as a legitimate form of international competition. The banning of chlorofluorocarbons.”

    In this domain as elsewhere, in Pinker’s judgement, science plus judicious optimism may yet win the day. Or, as he puts it: “We’re not on a trolley-track to oblivion.”