
Friday, May 29, 2015

Hacking the Nervous System - Vagal Nerve Stimulation to Control Inflammation (from Mosaic)

From Mosaic, this is an excellent article on the medical aspect of polyvagal theory. Those who work in the psychological trauma field already know a little or a lot about polyvagal theory from the work of Stephen Porges (as well as Bessel van der Kolk and Peter Levine, who have done a lot to get Porges' work better known) - see Porges' The Polyvagal Theory: Neurophysiological Foundations of Emotions, Attachment, Communication, and Self-regulation.

In this piece, Gaia Vince examines the role of the vagus nerve in physical conditions such as autoimmune disorders. Controlling inflammation through vagal stimulation could be a HUGE breakthrough in treating the many diseases that involve chronic inflammation at the molecular level.


© Job Boot

Hacking the nervous system



One nerve connects your vital organs, sensing and shaping your health. If we learn to control it, the future of medicine will be electric. 


By Gaia Vince.


When Maria Vrind, a former gymnast from Volendam in the Netherlands, found that the only way she could put her socks on in the morning was to lie on her back with her feet in the air, she had to accept that things had reached a crisis point. “I had become so stiff I couldn’t stand up,” she says. “It was a great shock because I’m such an active person.”

It was 1993. Vrind was in her late 40s and working two jobs, athletics coach and a carer for disabled people, but her condition now began taking over her life. “I had to stop my jobs and look for another one as I became increasingly disabled myself.” By the time she was diagnosed, seven years later, she was in severe pain and couldn’t walk any more. Her knees, ankles, wrists, elbows and shoulder joints were hot and inflamed. It was rheumatoid arthritis, a common but incurable autoimmune disorder in which the body attacks its own cells, in this case the lining of the joints, producing chronic inflammation and bone deformity.

Waiting rooms outside rheumatoid arthritis clinics used to be full of people in wheelchairs. That doesn’t happen as much now because of a new wave of drugs called biopharmaceuticals – highly targeted, genetically engineered proteins – which can really help. Not everyone feels better, however: even in countries with the best healthcare, at least 50 per cent of patients continue to suffer symptoms.

Like many patients, Vrind was given several different medications, including painkillers, a cancer drug called methotrexate to dampen her entire immune system, and biopharmaceuticals to block the production of specific inflammatory proteins. The drugs did their job well enough – at least, they did until one day in 2011, when they stopped working.

“I was on holiday with my family and my arthritis suddenly became terrible and I couldn’t walk – my daughter-in-law had to wash me.” Vrind was rushed to hospital, where she was hooked up to an intravenous drip and given another cancer drug, one that targeted her white blood cells. “It helped,” she admits, but she was nervous about relying on such a drug long-term.

Luckily, she would not have to. As she was resigning herself to a life of disability and monthly chemotherapy, a new treatment was being developed that would profoundly challenge our understanding of how the brain and body interact to control the immune system. It would open up a whole new approach to treating rheumatoid arthritis and other autoimmune diseases, using the nervous system to modify inflammation. It would even lead to research into how we might use our minds to stave off disease.

And, like many good ideas, it came from an unexpected source.




The nerve hunter

Kevin Tracey, a neurosurgeon based in New York, is a man haunted by personal events – a man with a mission. “My mother died from a brain tumour when I was five years old. It was very sudden and unexpected,” he says. “And I learned from that experience that the brain – nerves – are responsible for health.” This drove his decision to become a brain surgeon. Then, during his hospital training, he was looking after a patient with serious burns who suddenly suffered severe inflammation. “She was an 11-month-old baby girl called Janice who died in my arms.”

These traumatic moments made him a neurosurgeon who thinks a lot about inflammation. He believes it was this perspective that enabled him to interpret the results of an accidental experiment in a new way.

In the late 1990s, Tracey was experimenting with a rat’s brain. “We’d injected an anti-inflammatory drug into the brain because we were studying the beneficial effect of blocking inflammation during a stroke,” he recalls. “We were surprised to find that when the drug was present in the brain, it also blocked inflammation in the spleen and in other organs in the rest of the body. Yet the amount of drug we’d injected was far too small to have got into the bloodstream and travelled to the rest of the body.”

After months puzzling over this, he finally hit upon the idea that the brain might be using the nervous system – specifically the vagus nerve – to tell the spleen to switch off inflammation everywhere.

It was an extraordinary idea – if Tracey was right, inflammation in body tissues was being directly regulated by the brain. Communication between the immune system’s specialist cells in our organs and bloodstream and the electrical connections of the nervous system had been considered impossible. Now Tracey was apparently discovering that the two systems were intricately linked.

The first critical test of this exciting hypothesis was to cut the vagus nerve. When Tracey and his team did, injecting the anti-inflammatory drug into the brain no longer had an effect on the rest of the body. The second test was to stimulate the nerve without any drug in the system. “Because the vagus nerve, like all nerves, communicates information through electrical signals, it meant that we should be able to replicate the experiment by putting a nerve stimulator on the vagus nerve in the brainstem to block inflammation in the spleen,” he explains. “That’s what we did and that was the breakthrough experiment.”




The wandering nerve

The vagus nerve starts in the brainstem, just behind the ears. It travels down each side of the neck, across the chest and down through the abdomen. ‘Vagus’ is Latin for ‘wandering’ and indeed this bundle of nerve fibres roves through the body, networking the brain with the stomach and digestive tract, the lungs, heart, spleen, intestines, liver and kidneys, not to mention a range of other nerves that are involved in speech, eye contact, facial expressions and even your ability to tune in to other people’s voices. It is made of thousands and thousands of fibres and 80 per cent of them are sensory, meaning that the vagus nerve reports back to your brain what is going on in your organs.


Operating far below the level of our conscious minds, the vagus nerve is vital for keeping our bodies healthy. It is an essential part of the parasympathetic nervous system, which is responsible for calming organs after the stressed ‘fight-or-flight’ adrenaline response to danger. Not all vagus nerves are the same, however: some people have stronger vagus activity, which means their bodies can relax faster after stress.

The strength of your vagus response is known as your vagal tone, and it can be determined by using an electrocardiogram to measure heart rate. Every time you breathe in, your heart beats faster in order to speed the flow of oxygenated blood around your body. Breathe out and your heart rate slows. This variability is one of many things regulated by the vagus nerve, which is active when you breathe out but suppressed when you breathe in, so the bigger the difference in your heart rate when breathing in and out, the higher your vagal tone.
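The breath-linked variability described above is what heart-rate-variability (HRV) analysis quantifies. As a rough sketch of how such a measure works, the snippet below computes RMSSD (root mean square of successive differences between heartbeats), a standard short-term HRV statistic often used as a proxy for vagal tone. The R-R intervals here are hypothetical, made-up values, not real ECG data:

```python
import math

def rmssd(rr_intervals):
    """Root mean square of successive differences between R-R
    intervals (seconds), returned in milliseconds. Larger values
    indicate bigger beat-to-beat swings, i.e. higher vagal tone."""
    diffs = [(b - a) * 1000.0 for a, b in zip(rr_intervals, rr_intervals[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical R-R intervals across one breath: shorter while breathing
# in (heart speeds up), longer while breathing out (heart slows down).
breath_cycle = [0.78, 0.74, 0.72, 0.80, 0.86, 0.88, 0.82]
print(f"RMSSD: {rmssd(breath_cycle):.1f} ms")

# A perfectly steady heartbeat has zero variability.
print(f"RMSSD (flat): {rmssd([0.80, 0.80, 0.80]):.1f} ms")
```

This is only one of several HRV statistics; clinical work uses longer recordings and more careful preprocessing, but the underlying idea is the same: the bigger the respiration-driven swing in heart rate, the higher the inferred vagal tone.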

Research shows that a high vagal tone makes your body better at regulating blood glucose levels, reducing the likelihood of diabetes, stroke and cardiovascular disease. Low vagal tone, however, has been associated with chronic inflammation. As part of the immune system, inflammation has a useful role helping the body to heal after an injury, for example, but it can damage organs and blood vessels if it persists when it is not needed. One of the vagus nerve’s jobs is to reset the immune system and switch off production of proteins that fuel inflammation. Low vagal tone means this regulation is less effective and inflammation can become excessive, such as in Maria Vrind’s rheumatoid arthritis or in toxic shock syndrome, which Kevin Tracey believes killed little Janice.

Having found evidence of a role for the vagus in a range of chronic inflammatory diseases, including rheumatoid arthritis, Tracey and his colleagues wanted to see if it could become a possible route for treatment. The vagus nerve works as a two-way messenger, passing electrochemical signals between the organs and the brain. In chronic inflammatory disease, Tracey figured, messages from the brain telling the spleen to switch off production of a particular inflammatory protein, tumour necrosis factor (TNF), weren’t being sent. Perhaps the signals could be boosted?

He spent the next decade meticulously mapping all the neural pathways involved in regulating TNF, from the brainstem to the mitochondria inside all our cells. Eventually, with a robust understanding of how the vagus nerve controlled inflammation, Tracey was ready to test whether it was possible to intervene in human disease.




Stimulating trial

In the summer of 2011, Maria Vrind saw a newspaper advertisement calling for people with severe rheumatoid arthritis to volunteer for a clinical trial. Taking part would involve being fitted with an electrical implant directly connected to the vagus nerve. “I called them immediately,” she says. “I didn’t want to be on anticancer drugs my whole life; it’s bad for your organs and not good long-term.”

Tracey had designed the trial with his collaborator, Paul-Peter Tak, professor of rheumatology at the University of Amsterdam. Tak had long been searching for an alternative to strong drugs that suppress the immune system to treat rheumatoid arthritis. “The body’s immune response only becomes a problem when it attacks your own body rather than alien cells, or when it is chronic,” he reasoned. “So the question becomes: how can we enhance the body’s switch-off mechanism? How can we drive resolution?”

When Tracey called him to suggest stimulating the vagus nerve might be the answer by switching off production of TNF, Tak quickly saw the potential and was enthusiastic to see if it would work. Vagal nerve stimulation had already been approved in humans for epilepsy, so getting approval for an arthritis trial would be relatively straightforward. A more serious potential hurdle was whether people used to taking drugs for their condition would be willing to undergo an operation to implant a device inside their body: “There was a big question mark about whether patients would accept a neuroelectric device like a pacemaker,” Tak says.

He needn’t have worried. More than a thousand people expressed interest in the procedure, far more than were needed for the trial. In November 2011, Vrind was the first of 20 Dutch patients to be operated on.

“They put the pacemaker on the left-hand side of my chest, with wires that go up and attach to the vagus nerve in my throat,” she says. “I waited two weeks while the area healed, and then the doctors switched it on and adjusted the settings for me.”

She was given a magnet to swipe across her throat six times a day, activating the implant and stimulating her vagus nerve for 30 seconds at a time. The hope was that this would reduce the inflammatory response in her spleen. As Vrind and the other trial participants were sent home, it became a waiting game for Tracey, Tak and the team to see if the theory, lab studies and animal trials would bear fruit in real patients. “We hoped that for some, there would be an easing of their symptoms – perhaps their joints would become a little less painful,” Tak says.

At first, Vrind was a bit too eager for a miracle cure. She immediately stopped taking her pills, but her symptoms came back so badly that she was bedridden and in terrible pain. She went back on the drugs and they were gradually reduced over a week instead.

And then the extraordinary happened: Vrind experienced a recovery more remarkable than she or the scientists had dared hope for.

“Within a few weeks, I was in a great condition,” she says. “I could walk again and cycle, I started ice-skating again and got back to my gymnastics. I feel so much better.” She is still taking methotrexate, which she will need at a low dose for the rest of her life, but at 68, semi-retired Vrind now plays and teaches seniors’ volleyball a couple of hours a week, cycles for at least an hour every day, does gymnastics, and plays with her eight grandchildren.

Other patients on the trial had similar transformative experiences. The results are still being prepared for publication but Tak says more than half of the patients showed significant improvement and around one-third are in remission – in effect cured of their rheumatoid arthritis. Sixteen of the 20 patients on the trial not only felt better, but measures of inflammation in their blood also went down. Some are now entirely drug-free. Even those who have not experienced clinically significant improvements with the implant insist it helps them; nobody wants it removed.

“We have shown very clear trends with stimulation of three minutes a day,” Tak says. “When we discontinued stimulation, you could see disease came back again and levels of TNF in the blood went up. We restarted stimulation, and it normalised again.”

Tak suspects that patients will continue to need vagal nerve stimulation for life. But unlike the drugs, which work by preventing production of immune cells and proteins such as TNF, vagal nerve stimulation seems to restore the body’s natural balance. It reduces the over-production of TNF that causes chronic inflammation but does not affect healthy immune function, so the body can respond normally to infection.

“I’m really glad I got into the trial,” says Vrind. “It’s been more than three years now since the implant and my symptoms haven’t returned. At first I felt a pain in my head and throat when I used it, but within a couple of days, it stopped. Now I don’t feel anything except a tightness in my throat and my voice trembles while it’s working.

“I have occasional stiffness or a little pain in my knee sometimes but it’s gone in a couple of hours. I don’t have any side-effects from the implant, like I had with the drugs, and the effect is not wearing off, like it did with the drugs.”




Raising the tone

Having an electrical device surgically implanted into your neck for the rest of your life is a serious procedure. But the technique has proved so successful – and so appealing to patients – that other researchers are now looking into using vagal nerve stimulation for a range of other chronic debilitating conditions, including inflammatory bowel disease, asthma, diabetes, chronic fatigue syndrome and obesity.



But what about people who just have low vagal tone, whose physical and mental health could benefit from giving it a boost? Low vagal tone is associated with a range of health risks, whereas people with high vagal tone are not just healthier, they’re also socially and psychologically stronger – better able to concentrate and remember things, happier and less likely to be depressed, more empathetic and more likely to have close friendships.

Twin studies show that to a certain extent, vagal tone is genetically predetermined – some people are born luckier than others. But low vagal tone is more prevalent in those with certain lifestyles – people who do little exercise, for example. This led psychologists at the University of North Carolina at Chapel Hill to wonder if the relationship between vagal tone and wellbeing could be harnessed without the need for implants.

In 2010, Barbara Fredrickson and Bethany Kok recruited around 70 university staff members for an experiment. Each volunteer was asked to record the strength of emotions they felt every day. Vagal tone was measured at the beginning of the experiment and at the end, nine weeks later. As part of the experiment, half of the participants were taught a meditation technique to promote feelings of goodwill towards themselves and others.

Those who meditated showed a significant rise in vagal tone, which was associated with reported increases in positive emotions. “That was the first experimental evidence that if you increased positive emotions and that led to increased social closeness, then vagal tone changed,” Kok says.

Now at the Max Planck Institute in Germany, Kok is conducting a much larger trial to see if the results they found can be replicated. If so, vagal tone could one day be used as a diagnostic tool. In a way, it already is. “Hospitals already track heart-rate variability – vagal tone – in patients that have had a heart attack,” she says, “because it is known that having low variability is a risk factor.”

The implications of being able to simply and cheaply improve vagal tone, and so relieve major public health burdens such as cardiovascular disease and diabetes, are enormous. It has the potential to change completely how we view disease. If a visit to your GP included a check of your vagal tone as routinely as a blood pressure test, for example, you could be prescribed therapies to improve it. But this is still a long way off: “We don’t even know yet what a healthy vagal tone looks like,” cautions Kok. “We’re just looking at ranges; we don’t have precise measurements like we do for blood pressure.”

What seems more likely in the shorter term is that devices will be implanted for many diseases that today are treated by drugs: “As the technology improves and these devices get smaller and more precise,” says Kevin Tracey, “I envisage a time where devices to control neural circuits for bioelectronic medicine will be injected – they will be placed either under local anaesthesia or under mild sedation.”


However the technology develops, our understanding of how the body manages disease has changed for ever. “It’s become increasingly clear that we can’t see organ systems in isolation, like we did in the past,” says Paul-Peter Tak. “We just looked at the immune system and therefore we have medicines that target the immune system.

“But it’s very clear that the human is one entity: mind and body are one. It sounds logical but it’s not how we looked at it before. We didn’t have the science to agree with what may seem intuitive. Now we have new data and new insights.”

And Maria Vrind, who despite severe rheumatoid arthritis can now cycle pain-free around Volendam, has a new lease of life: “It’s not a miracle – they told me how it works through electrical impulses – but it feels magical. I don’t want them to remove it ever. I have my life back!”

Tuesday, July 29, 2014

Marek Kohn: Smart and Smarter Drugs (via Mosaic)

Welcome to the Amphetamine Age! The most popular and widely used "smart drugs" are the classic psychostimulants – amphetamines (often prescribed under the name Adderall) and methylphenidate (also known by its brand name Ritalin) – along with modafinil, a wake-promoting agent used for sleep disorders.

In this excellent article for Mosaic, Marek Kohn wonders if we are asking the right questions about these drugs - some of which we prescribe to our children.

Smart and smarter drugs

Are we asking the right questions about smart drugs? Marek Kohn looks at what they can do for us – and what they can’t.

Marek Kohn | 29 July 2014


© Mari Kanstad Johnsen

“You know how they say that we can only access 20 per cent of our brain?” says the man who offers stressed-out, blank-screened ‘writer’ Eddie Morra a fateful pill in the 2011 film Limitless. “Well, what this does, it lets you access all of it.” Morra, played by Bradley Cooper, is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire.

Limitless is what you get when you flatter yourself that your head houses the most complex known object in the universe, and run away with the notion that it must have powers to match. More down to earth is the idea that we always have untapped cognitive potential, but that life gets between us and the best we could possibly manage.

Most people’s best days still leave them wondering what might have been. Life is interference, acute and chronic: the broken night’s sleep, the replayed arguments with our nearest and dearest, the suspected slight from a colleague, the mortgage, middle age, the buzzing fly. This is what preoccupation means. Noise, alarms and gnawing unease all occupy the cortex and commandeer its resources, leaving the brain short of space for other demands.

Even small differences in cognitive performance can make a world of difference – between a good CV and an outstanding one, between a second-class degree and a first, and between a winner and an also-ran. According to widespread reports, some students recognise this by using drugs to enhance their performance, particularly ahead of exams or coursework deadlines. How many of them are doing so is unknown: it may be fewer than you would think from reading both mainstream media coverage and scientific journals, but it’s undoubtedly going on.

It’s also been suggested that some students are taking cognitive enhancement drugs on into their professional lives after they graduate – in a report in New York magazine, for example, which dubbed the wake-promoting agent modafinil ‘the real Limitless drug’.

The drugs concerned are the ‘classic’ psychostimulants: amphetamines (often prescribed under the name Adderall) and methylphenidate (also known by its brand name Ritalin) – both extensively prescribed to children and young adults for the burgeoning diagnosis of attention deficit hyperactivity disorder (ADHD) – as well as modafinil, which is indicated for sleep disorders, including those produced by shift work.

None of these drugs are new. The performance-enhancing effects of amphetamine were reported as far back as the 1930s, among adolescent boys taking the Stanford achievement test. Even the youngest of these drugs, modafinil, was first synthesised in the 1970s, when the term ‘nootropics’ was coined to define a class of drugs that improves the mind. And yet cognitive enhancement drugs are usually depicted as a distinctly contemporary phenomenon, with the implication that more of them are down the road, offering new capacities and increasing ethical challenges.

§

When scientists talk about cognitive enhancers today, they are often discussing drugs that mitigate the effects of the dementias and other cognitive disorders, whether they are new candidates or ones already in use such as donepezil and galantamine. Their aim is to recover function or reduce impairment, not improve on healthy levels – although, as populations age, dementias and other cognition disorders will climb healthcare priority lists, and the drugs developed to treat them may also turn out to aid cognition among healthy people, young and old.

By contrast, when futurists and ethicists talk about ‘smart drugs’ or cognitive enhancement, they tend to mean reaching levels of performance that were previously unattainable even under ideal conditions or acquiring new kinds of mental capability altogether.

One scientist who is eager to peer at the horizon is Gary Lynch, a professor in the School of Medicine at the University of California, Irvine. What excites him is what he sees as “the ultimate description of enhancement”, the production of new capacities. “I’m interested in [the] capability to do things you can’t do now, thoughts that you can’t think, ideas that you can’t form.” He suggests extreme memory enhancement as an example of something you can’t do now: the concerted boosting of attention, learning and memory could enable you to repeat a conversation verbatim or do mental maths at a far higher level than normal.

Thoughts that can’t be thought and ideas that can’t be formed are, by nature, difficult – if not impossible – to imagine. “It’s at the fringe; it’s beyond current cognitive science,” Lynch admits. For the time being, we remain in the Amphetamine Age of cognitive pharmacology.

§

Cognition is a suite of mental phenomena that includes memory, attention and executive functions. Executive functions are not clearly defined, but you know them when you see them. They occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren’t), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to ten instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human. Any candidate cognition drug would have to enhance executive functions to be considered truly ‘smart’.

These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it’s in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as “being able to think about things that aren’t currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it’s the future in just a few seconds.”

At the front of the brain is the prefrontal cortex. This is the zone that produces such representations, and it is the focus of Arnsten’s work. “The way the prefrontal cortex creates these representations is by having pyramidal cells – they’re actually shaped like little pyramids – exciting each other. They keep each other firing, even when there’s no information coming in from the environment to stimulate the circuits,” she explains.

Several chemical influences can completely disconnect those circuits so they’re no longer able to excite each other. “That’s what happens when we’re tired, when we’re stressed.” Drugs like caffeine and nicotine enhance the neurotransmitter acetylcholine, which helps restore function to the circuits. Hence people drink tea and coffee, or smoke cigarettes, “to try and put [the] prefrontal cortex into a more optimal state”.

In a broad sense, it’s enhancement; in a stricter one, it’s optimisation. “I think people think about smart drugs the way they think about steroids in athletics,” Arnsten says, “but it’s not a proper analogy, because with steroids you’re creating more muscle. With smart drugs, all you’re doing is taking the brain that you have and putting it in its optimal chemical state. You’re not taking Homer Simpson and making him into Einstein.”

What’s more, the brain is complicated. In trying to upgrade it, you risk upsetting its intricate balance. “It’s not just about more, it’s about having to be exquisitely and exactly right. And that’s very hard to do.”

Scientists are frequently reminded of the difference between ‘more’ and ‘right’ when they administer cognitive enhancers. Methylphenidate improves working memory in rats performing tasks that involve the prefrontal cortex, but only in a narrow range of doses. The graphs rise, level off and drop, tracing a path from ‘not enough’ to ‘too much’ in the shape of an inverted ‘U’. Outside the lab, this point can be illustrated by comparing the effects of the first coffee of the day with those of the second or third.
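The inverted-U relationship described above can be sketched with a simple quadratic model. This is purely illustrative – the function, optimum dose, and numbers below are invented for demonstration, not drawn from any pharmacological data:

```python
def performance(dose, optimum=1.0, peak=100.0):
    """Toy inverted-U dose-response: performance peaks at `optimum`
    and falls off symmetrically on either side, floored at zero."""
    return max(0.0, peak - ((dose - optimum) ** 2) * peak)

# Tracing the curve from 'not enough' through 'just right' to 'too much'.
for d in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"dose {d:.1f} -> performance {performance(d):.0f}")
```

The shape makes the practical problem clear: identical increments of dose first help, then hurt, so the useful window may be narrow – and, as Robbins notes, overshooting is not merely wasteful but potentially dangerous.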

A drug’s scope for enhancement may also be compromised by differences in optimal doses among the various circuits it affects. “What’s good for one system may be bad for another system,” says Trevor Robbins, Professor of Cognitive Neuroscience at the University of Cambridge. And it may be bad for the system as a whole.

“It’s clear from the experimental literature that you can affect memory with pharmacological agents, but the problem is keeping them safe,” Robbins observes, “because this inverted-U-shape issue does give you the problem of possible epilepsy, convulsions and so forth.”

§

The defining cognitive challenge of modern life is how to divide attention efficiently among multiple tasks and stimuli: not just how to concentrate, but how to compartmentalise. It’s about switching rapidly and smoothly between tasks, keeping the unresolved material from each to hand while the processor swivels round to the next. It’s the difference between the classical ideal of scholarship, of unqualified absorption in a single theme, and the reality of mental operations in a multiple-choice world where we are constantly beset by competing bids for our attention.

“Those two types of attention are really in opposition to each other,” says Barbara Sahakian, Professor of Clinical Neuropsychology at the University of Cambridge. The implication is that if you enhance focused attention, it will be at the expense of divided attention, and vice versa.

However, Martin Sarter, a professor at the University of Michigan, sees it differently. According to Sarter, “pretty much everybody” in the field agrees that we deal with multiple tasks by ‘time-sharing’, tackling “one task at a time and using more or less complicated scripts to flip between tasks. This comes down to working memory plus focused attention.”

Increasing focus, Sarter argues, increases the amount of work the brain gets done on a task before it switches to another, and thus reduces the amount of unfinished material from the task that has to be held in working memory until its turn comes round again. A drug that enhances focused attention will lower demands on both working memory and the control systems that monitor and manage the tasks in hand.

“That, we understand a bit,” says Sarter. “How to enhance working memory capacity or executive control independently, I don’t think we do understand, but that would be a neat trick.”
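Sarter's time-sharing account can be caricatured with a toy round-robin model: tasks are visited in turn, `focus` units of work are done per visit, and every switch leaves unfinished material to be held in working memory. The model and numbers are invented purely to illustrate the argument, not to describe any real cognitive process:

```python
def switches_needed(task_sizes, focus):
    """Count task switches (visits) before all tasks complete,
    processing tasks round-robin with `focus` units of work per
    visit. `focus` must be positive or the loop never terminates."""
    remaining = list(task_sizes)
    switches = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                remaining[i] = max(0, r - focus)
                switches += 1
    return switches

tasks = [10, 6, 8]  # hypothetical units of work per task
print(switches_needed(tasks, focus=2))  # low focus: many switches
print(switches_needed(tasks, focus=5))  # high focus: few switches
```

Greater focus finishes each task in fewer visits, so less half-done material has to be carried across switches – which is exactly the mechanism by which, on Sarter's account, a focus-enhancing drug would lighten the load on working memory and executive control.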

§

People have known for a long time that stimulants can make users warm to their tasks. In 1916, when a man named Horace Kingsley was arrested in a pub on England’s south coast for selling cocaine to soldiers, the authorities charged him with ‘selling a powder to members of His Majesty’s Forces, with intent to make them less capable of performing their duties’. On the contrary, he argued: “It makes you most keen on what you are doing.”

Although it doesn’t produce the buzz that hedonistic drug-takers pursue, modafinil may have other, more subtle attractions. Researchers at the University of Cambridge found it increased people’s enjoyment of the cognition tests they were set, without improving their general mood.

“Under placebo, there’s not much pleasure there at all, but under modafinil suddenly these tests seem very pleasurable,” remarks Sahakian. Performance in planning and working memory improved, too. Sahakian considers modafinil a true cognition enhancer, enabling young and healthy people to perform better on difficult tasks than when they are given a placebo.

Other scientists are sceptical about whether any of these drugs enhance cognition directly, rather than by improving the user’s state of mind. “I’m just not seeing the evidence that indicates these are clear cognition enhancers,” says Sarter, who thinks they may be achieving their effects by relieving tiredness and boredom. “What most of these are actually doing is enabling the person who’s taking them to focus,” says Steven Rose, emeritus professor of life sciences at the Open University. “It’s peripheral to the learning process itself.”

It may, however, be central to the person’s experience of what the learning experience feels like. Judging by accounts such as those gathered at an elite (unnamed) American university by researcher Scott Vrecko and published in 2013, the magic of cognitive enhancers lies in their ability to make study a pleasure. They overcome lethargy, reluctance and lack of confidence.

“I’ll get out my books, laptop, and stuff, but even that can be a challenge,” a student called Sarah told Vrecko. But when the Adderall takes effect, “all of a sudden I’ll just be like, ‘Oh wait. I can do this.’”

The doors of engagement open, as described by another student: “I remember getting just completely absorbed in one book, and then another, and as I was writing I was making connections between them [and] actually enjoying the process of putting ideas together. I hadn’t had that before.”

These students did not see their drug use as anything more than the removal of things that got between them and studying. They didn’t think drugs made them smarter. Yet even so, it would be unwise to assume that the effects were as impressive as their users thought they were.

As the psychologist Derek Russell Davis drily observed back in 1947, “the subject who has taken amphetamine usually judges the effects more favourably than the experimenter”. By way of illustration, he recalled how “a research colleague, left to his own devices after a dose of amphetamine, spent a morning preparing with great thoroughness a grandiose research-plan, of which he would never find time to carry out even a quarter.”

One finding from a 2010 review of research that may come as a surprise to students who trust stimulants is that methylphenidate does not enhance attention and may even interfere with it. A recent study of Adderall at the University of Pennsylvania showed the drug failed to significantly affect cognition in healthy young adults – although those who took it mostly believed that it had.

In this kind of drug-taking, sensation isn’t the goal but the effect of pursuing other goals. Recognising it as a distinct form of drug use – for neither medication nor recreation, but for application – raises several questions: one is whether these drugs are effective over sustained periods (the 2010 review of work on modafinil and methylphenidate found only two studies for each drug that looked at the effects of repeated doses, and the longest of those lasted just six weeks), and another is what effects they might have on their users’ health. Sahakian emphasises the need for a long-term study “to determine whether these cognitive-enhancing drugs are safe for healthy people to use,” adding that “our brains are in development into late adolescence and even young adulthood, [so] these safety concerns are particularly great for young, healthy people.”

§

Drugs and catastrophe are seemingly never far apart, whether in laboratories, real life or Limitless. Downsides are all but unavoidable: if a drug enhances one particular cognitive function, the price may be paid by other functions. To enhance one dimension of cognition, you’ll need to appropriate resources that would otherwise be available for others.

“There are costs to narrowing your attention,” Sarter points out. “Not only all the stuff in the periphery that might be very significant that you might be missing, but internally – if you narrow your attentional field, it also narrows the range and scope of associations you could bring into your thought process.”

In many settings that could well prove costly – but in others, where you’re not being asked to think about the meaning of life, it could be beneficial. The inability to attend to one’s internal network of associations would be desirable in an air traffic controller, for example.

If paying Paul always requires robbing Peter, we can’t expect drugs to produce a general, cortex-wide expansion of cognition. But by allocating extra resources to one domain or the other, could you surpass the maximum levels you could previously have attained or even the highest levels attained by anyone?

“I think you can and you will,” says Sarter, “but you will do so with respect to very defined functions within very defined task contexts.”

For example, one of cognitive psychology’s most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or ten? “Yes. If you’re asked to do nothing else, why not? That’s a fairly simple function.”

§

Scientists’ opinions differ on the prospects for progressing beyond the Amphetamine Age. Rose thinks that because most drugs work by affecting multiple brain processes, the idea of a pure ‘nootropic’ that very specifically affects coding is implausible and has “long gone by the board”. At the other end of the neuro-optimism scale, Lynch says “we are very close to being able to allow people to encode better”.

Lynch argues that recent advances in neuroscience have opened the way for the smart design of drugs, configured for specific biological targets in the brain. “Memory enhancement is not very far off,” he says, although the prospects for other kinds of mental enhancement are “very difficult to know… To me, there’s an inevitability to the thing, but a timeline is difficult.”

Lynch speaks after spending many years in an ultimately unsuccessful bid to develop a class of molecules called ampakines as a treatment for Alzheimer’s disease. “The ampakines have been around for quite a while,” he acknowledges. “They’ve gone into trials on ADHD; they’ve been in trials on memory. The problem has always been [that] there are side-effects.”

Echoing Robbins’s caveat about convulsions, Lynch draws a somewhat alarming lesson from his experience for researchers seeking new drugs to organise larger cognitive networks within the cortex: “The trick is not just to expand the networks, but to expand the networks without increasing the likelihood of seizures or some kind of psychosis. That may, in fact, be the most difficult part of the problem.”

Lynch points to nicotinic receptor agents – molecules that act on the neurotransmitter receptors affected by nicotine, without necessarily being related to nicotine itself – as ones to watch when looking out for potential new cognitive enhancers. So does Sarter, who also emphasises the importance of basing cognitive enhancer research on neurobiological knowledge. A class of agents known as α4β2* nicotinic receptor agonists seem to act on mechanisms that control attention, Sarter says, “and to do so in a very orderly fashion that maps them to the neurobiology.” Among the currently known candidates, he believes they come closest “to fulfilling the criteria for true cognition enhancers.”

He is downbeat, however, about the likelihood of the pharmaceutical industry turning them into products. Its interest in cognitive enhancers is shrinking, he says, “because these drugs are not working for the big indications, which is the market that drives these developments. Even adult ADHD has not been considered a sufficiently attractive large market.”

A substance called piracetam was once widely touted as a smart drug, as Rose recalled in a commentary piece published in 2002. Piracetam still has its enthusiasts, but its name is now mostly a reminder that candidate drugs come and go. “There have been a lot of clinical trials for a lot of substances that didn’t do anything,” observes Sarter.

Frustrated by the lack of results, pharmaceutical companies have been shutting down their psychiatric drug research programmes. Traditional methods, such as synthesising new molecules and seeing what effect they have on symptoms, seem to have run their course. A shift of strategy is looming, towards research that focuses on genes and brain circuitry rather than chemicals. The shift will prolong the wait for new blockbuster drugs further, as the new systems are developed, and offers no guarantees of results.

Lynch, Sarter and the pharmaceutical industry all agree that developing smarter drugs will require smarter science. A few new drugs (perhaps nicotinic receptor agonists, as Lynch and Sarter suggest) might emerge in the current system, but finding out what’s possible beyond that will require a reinvented research programme. For real success, research needs to show what these drugs can do at the level of systems neuroscience and to establish systematic relationships between drug effects on circuits, receptors, behaviour and cognitive operations.

§

In the meantime, with no end to the Amphetamine Age in sight, smarter answers are needed for the unanswered questions about the drugs people already take in the hope of enhancing their cognitive powers – questions about whether they work, how they work, whether they work differently in people with different gene variants, the effects they have on the mind after their initial novelty has worn off, and the effects they may have on our health and wellbeing in the long term.

Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Recent reviews indicate that they may help to lay down long-term memories and perhaps help keep information present to hand in working memory. They may also enhance ‘cognitive control’, the ability to adapt behaviour in changing conditions, particularly in people whose powers of cognitive control are modest to start with.

Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to drugs’ effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn’t tell you the potential impact that its effects on mood or motivation may have on human cognition.

It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115.

There are smarter questions to ask about fairness and cognition-affecting drugs. So far, the ethical anxieties have revolved around elite competition: whether students who take drugs to enhance performance are cheating, and whether they will put pressure on their peers to do likewise to avoid being at a competitive disadvantage. But attention is not just a problem for the minority who reach higher education or certain professions.

In their book Scarcity: Why having too little means so much, Sendhil Mullainathan and Eldar Shafir describe how they dumbed people down by inducing them to think about the cost of living. Recruiting shoppers from a New Jersey mall, they prefaced cognition tests with a hypothetical question that invited respondents to imagine they had to get their cars serviced. They also asked the shoppers to disclose their household incomes. When the price of the service was given as $300, the scores of rich and poor were indistinguishable. When it was $3,000, the poorer shoppers scored worse; in fact, their scores were worse than those of people who did similar tests after a night without sleep. Their results implied a drop in IQ of 13 or 14 points, the difference between average and ‘borderline deficient’ intelligence.

Mullainathan and Shafir argue that the increase in the imaginary cost triggered a reallocation of mental capacity among those for whom such a sum would be a serious problem in real life. It activated thought processes that would not shut off, reducing the computational power available to process the intelligence tests. If that is what a hypothetical problem can do, the effects of poverty and money worries in the real world must be a cognitive scandal of staggering proportions.

Mullainathan and Shafir’s work points towards a bigger picture of fairness in cognitive enhancement. One message that has emerged from the research so far is that cognition-affecting drugs do more for lower performers than high-fliers and that they can offset disadvantages, such as lack of sleep. Drugs that promote concentration might help poor people in their efforts to better themselves – studying at night school while fatigued from long hours of labour, for example – or, if Sarter is right about how improving focused attention can make it easier to deal with multiple demands, in coping with bills that outnumber earnings.

It’s certainly better to enhance cognitive performance through healthy living, fitness and educational opportunities than by taking pills. But we also have to recognise that it is far harder for the poor to achieve best cognitive practice than the rich. The question of whether drugs could help people get out of poverty, by offsetting its cognitive impact on them, might actually be the smartest question we can ask about smart drugs.

Sunday, March 09, 2014

In Conversation with… Steven Pinker (via Mosaic)

Mosaic is a new open access online science magazine produced by the Wellcome Trust. In the first collection of stories they have posted, one is an interview with the always interesting (and sometimes infuriating) Steven Pinker of Harvard University - and author of How the Mind Works (1997) and The Blank Slate: The modern denial of human nature (2002) [both nominated for the Pulitzer Prize], and The Better Angels of Our Nature: Why Violence Has Declined (2011).

Those who follow Pinker's writing will find little new here, but for those who do not this article/interview provides an excellent overview of the man and his career.

In Conversation with… Steven Pinker

Oliver Burkeman explores human nature, violence, feminism and religion with one of the world’s most controversial cognitive scientists. Can he dent Steven Pinker’s optimism?

March 4, 2014

Steven Pinker holding a piece of the Berlin Wall

In the week that I interview the cognitive psychologist and bestselling author Steven Pinker in his office at Harvard, police release the agonising recordings of emergency calls made during the Sandy Hook school shootings. In Yemen, a suicide attack on the defence ministry kills more than 50 people. An American teacher is shot dead as he goes jogging in Libya. Several people are killed in riots between political factions in Thailand, and peacekeepers have to be dispatched to the Central African Republic.

In short, it’s not hard to find anecdotes that seem to contradict a guiding principle behind much of Pinker’s work – which is that science and human reason are, slowly but unmistakably, making the world a better place.

Repeatedly during our conversation, I seek to puncture the silver-haired professor’s quietly relentless optimism. If the ongoing tolls of war and violence can’t do it, what about the prevalence in America of unscientific beliefs about the origins of life? Or the devastating potential impacts of climate change, paired with the news – also released in the week we meet – that 23 per cent of Americans don’t believe it’s happening, up seven percentage points in just eight months?

I try. But it proves far from easy.

At first glance Pinker’s implacable optimism, though in keeping with his sunny demeanour and stereotypically Canadian friendliness, presents a puzzle. His stellar career – which includes two Pulitzer Prize nominations for his books How the Mind Works (1997) and The Blank Slate: The modern denial of human nature (2002) – has been defined, above all, by support for the fraught notion of human nature: the contention that genetic predispositions account in hugely significant ways for how we think, feel and act, why we behave towards others as we do, and why we excel in certain areas rather than others.

This has frequently drawn Pinker into controversy – as in 2005, when he offered a defence of Larry Summers, then Harvard’s President, who had suggested that the under-representation of women in science and maths careers might be down to innate sex differences.

“The possibility that men and women might differ for reasons other than socialisation, expectations, hidden biases and barriers is very close to an absolute taboo,” Pinker tells me. He faults books such as Lean In, by Facebook’s chief operating officer, Sheryl Sandberg, for not entertaining the notion that men and women might not have “identical life desires”. But he also insists that taking the possibility of such differences seriously need not lend any justification to policies or prejudices that exclude women from positions of expertise or power.

“Even if there are sex differences, they’re differences in the means of two overlapping populations, so for any [stereotypically female] trait you care to name, there’ll be many men who are more extreme than most women, and vice versa. So as a matter of both efficiency and of fairness, you should treat every individual as an individual, and not prejudge them.”

It is generally assumed that anyone who takes human nature seriously will be a fatalist, and probably politically conservative. If we’re pre-wired to be how we are, the reasoning goes, we might as well accept it and give up on hopes of any change. One way of interpreting Pinker’s most recent book, The Better Angels of Our Nature, is as an 800-page doorstopper of a riposte to this idea. Not only can we change, but when it comes to arguably the most important measure of improvement – the violence we inflict on each other – we actually have changed, to an almost incredible degree.


“I had very often come across the objection that if human nature exists – including some ugly motives like revenge, dominance, greed and lust – then that would imply it’s pointless to try to improve the human condition, because humans are innately depraved,” says the 59-year-old, whose distinctive appearance – today he is sporting black cowboy boots – frequently gets him stopped in the street. “Or there’s an alternative objection: that we ought to improve our lot, and therefore, it cannot be the case that human nature exists.”

Pinker puts all this down to “a fear that acknowledging human nature would subvert any attempt to improve the human condition”. Better Angels argues that this is a misunderstanding of what human nature means. It shouldn’t be identified with a certain set of behaviours; rather, we have a complex variety of predispositions, violent and peaceful, that can be activated in different ways by different environments. The book’s title, drawn from Abraham Lincoln’s first inaugural address, is “a poetic allusion to the parts of human nature that can overcome the nastier parts,” he explains.

But Better Angels is notable above all for the sheer weight of evidence it amasses, culled from forensic archaeology, government statistics, town records, and studies by ‘atrocitologists’ of historical genocides and other mass killings. The book demonstrates that homicides, calculated as a proportion of the world’s population at any given point, have plummeted; when you look at the numbers this way, World War II wasn’t the worst single atrocity in history, but more like the tenth.

Pinker dwells, in sometimes unnerving detail, on horrifying methods of torture once considered routine. “The Heretic’s Fork had a pair of sharp spikes at each end,” he writes, in what is definitely not the most appalling passage. “One end was propped under the victim’s jaw and the other at the base of his neck, so that as his muscles became exhausted he would impale himself in both places.”

“Human nature or no human nature,” Pinker says, “it’s just a brute fact that we don’t throw virgins into volcanoes any more. We don’t execute people for shoplifting a cabbage. And we used to.”

He offers a multi-pronged explanation for this decline, from the rise of the state and of cities, to literacy, trade and democracy. Whether this constitutes an across-the-board endorsement of scientific rationality may be debated. (“Like other latter-day partisans of ‘Enlightenment values’,” the critic John Gray wrote, “Pinker prefers to ignore the fact that many Enlightenment thinkers have been doctrinally anti-liberal, while quite a few have favoured the large-scale use of political violence”.) But it’s hard to question the basic finding that your chances of meeting a sticky end, all else being equal, are vastly lower in 2014 than they were in 1014.

If Pinker’s message has proved hard for some to swallow, that may be because our standards are improving even faster than our actual behaviour, giving the misleading impression that things are getting worse. “Hate attacks on Muslims are deplorable, and they ought to be combated, and it reflects well that we’re concerned when they do occur,” Pinker says. “But by the standards of past pogroms and ethnic cleansings, they’re in the noise: this is not a phenomenon of the same magnitude as the ethnic expulsions of decades past.”

We’ve even witnessed the emergence of whole new categories of condemnable acts. Take bullying, says Pinker: “The President of the United States gave a speech denouncing bullying! When I was a child, this would have been worthy of satire.” As we continue to construct a social environment that activates more and more of our peaceable dispositions, and fewer and fewer of our aggressive ones, the remaining instances of bad behaviour stick out like ever-sorer thumbs.

What’s more, evolutionary psychology, one of Pinker’s several specialisms, can explain why. For reasons that long ago made excellent sense, our brains are adapted to focus on bad news over good, vivid threats over vague ones, and recent horrors over historically distant atrocities. Our elevated levels of anxiety about the future might actually be a sign of reason’s triumph.

“It could be interpreted as a sign of our growing up,” Pinker says. “We worry about more things, because we know that there are more things to worry about. Every time we go to a restaurant, we worry we might be ingesting saturated fats, or carcinogens. For my parents’ generation, the main concern about food was: ‘Does it taste delicious?’”

§

Many of Pinker’s most ambitious ideas about science and human morality have their origins in a seemingly trivial observation about irregular verbs. Building on the groundbreaking linguistic ideas of Noam Chomsky, Pinker proposed that certain simple language errors committed by young children “capture the essence of language” itself.

When a three-year-old says “I eated the ice cream” or “we holded the kittens”, she is, Pinker observes, following a grammar rule correctly, and making a mistake only because we happen to suspend the rule for those verbs in English. Since she couldn’t have learned “eated” or “holded” by simply imitating adult speakers, this points to the presence of innate cognitive machinery – a “language instinct”, to quote the title of Pinker’s 1994 book – that enables a young child to construct novel linguistic forms by following rules.

(Irregular verbs have an even more intimate role in Pinker’s life: he met his wife, the philosopher Rebecca Goldstein, through an email exchange after he mentioned her correct use of the past participle ‘stridden’ in his book Words and Rules.)

Years later, in 2007’s The Stuff of Thought, he extended this reasoning to the structures of “mentalese”, the wordless “language of thought” that he argues we use to think in. When, for example, we use spatial language to talk about time – as in “a long day”, or bringing a meeting “forward” – might we be relying on an in-built, pre-linguistic tendency to think about the abstract notion of time by analogy to space, something far more concretely graspable to an early human concerned with food, shelter and survival?

This view of the mind – as a set of modules evolved to confront specific cognitive challenges on the Pleistocene savannah – is most ambitiously on display in How the Mind Works, a dazzling effort to “reverse engineer” all of our mental capacities, asking for what purposes each might have been selected. Love, humour, war, jealousy, the disgust we feel at the idea of eating certain animals but not others, religious food taboos, compulsive lying: none of them escape the blade of Pinker’s rationalist scalpel.

Assuming you buy the book’s general approach, it is almost impossible after reading it to cling to the romantic notion that there might be more to our inner lives than the brute facts of biology and natural selection. The notable exception is how the brain causes sentience, or conscious awareness: after a long discussion on the topic, Pinker finally concludes: “Beats me!” There’s reason to believe, he argues, that humans may simply lack the mental capacity ever to solve the mind–body problem.

But the broader philosophical question – how far science can, or should, reach into the life of the mind – got a disputatious airing last year, when Pinker wrote an essay for the New Republic entitled ‘Science is not your enemy’. It was motivated in part by reports on both sides of the Atlantic about declining student enrolments in humanities subjects, and was Pinker’s intervention in the long-running debate over ‘scientism’: are science and scientists guilty of attempting to colonise areas of intellectual life where they don’t belong?

Rather than denying that this was a real phenomenon, as numerous scientists have, Pinker audaciously claimed it was a good thing – providing you defined ‘scientism’ correctly. Humanities scholars had themselves to blame, he implied, for the decline of their fields: by insisting on remaining inside departmental silos, resistant to new approaches, they’d helped guarantee their growing irrelevance. Science was not engaged upon “an imperialistic drive to occupy the humanities,” he wrote. “The promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them.”

In a furious response, entitled ‘Crimes against humanities’, the New Republic’s literary editor, Leon Wieseltier, accused Pinker of denying the very possibility of valid yet non-scientific knowledge. How absurd, he argued, to imagine that a scientific analysis of a painting – a chemical breakdown of its pigments and textures, and so on – could be the only thing worth saying about it. Pinker calls this a “paranoid” interpretation of his argument. “How could an understanding of perception of colour, of form, of lighting, of shading, of content such as faces and landscapes not enrich our understanding of art?”

Yet if Wieseltier’s retort was overheated, he may still have a point. Pinker wasn’t – and isn’t – merely calling on scholars from different disciplines to talk to each other more. His argument is that any scholar committed to the idea that “the world is intelligible” is doing science. “The great thinkers of the Age of Reason and the Enlightenment were scientists,” he wrote, naming various philosophers.

It seems to follow from this that non-scientific scholarship doesn’t help make the world intelligible, but Pinker will have none of it. “I’m married to a humanities scholar. I collaborate with humanities scholars. I’m in fields like linguistics, where deans don’t know whether it’s the humanities or not,” he says. “Many humanities scholars – particularly here at Harvard and MIT, but elsewhere too – find it very exciting that there might be new ways of approaching old problems, and an influx of new ideas. I mean, who in their right mind could defend insularity as a principle for excellence in anything?”

Of course there are ways of studying a painting, he concedes, that result in worthwhile insights that can’t be described as scientific. But, he tells me, “I think the humanities would do themselves a favour by not insisting on staying in a silo. If they are wanting to attract the smartest minds from the next generation, it would be wise to hold out the promise that there will be new ways of understanding things – the same expansive mindset that attracts smart, ambitious people to the sciences could also attract them to the humanities.

“That it isn’t just a question of reinterpreting the same works of art, with the same methods, over and over again,” he concludes. “I don’t see why humanistic scholarship can’t make progress. Wieseltier seemed to insist that it can’t, but I don’t think most people in the humanities would agree with this. He claims to speak for the humanities. But I can imagine a lot of people in the humanities saying: ‘Speak for yourself!’”

§

Pinker was born in 1954 in Montréal, and raised in that bilingual city’s English-speaking Jewish community (his sister, Susan, is also a psychologist, of the clinical rather than research variety). It’s tempting to try to attribute his subsequent intellectual interests to the milieu of his upbringing. Did his focus on language emerge from having grown up in a linguistic battleground? Did his conception of the mind as a complex assemblage of modules, each designed for specific purposes, arise from inspecting the machines his grandfather used as a garment manufacturer? Was it growing up in the 1960s, when many progressives embraced a ‘blank slate’ model of humans as a precondition for radical change, that prompted him to rebel against that notion in his work?

Such speculation can be hazardous when it concerns an evolutionary psychologist who believes that genetic heritage is more important than parenting or peer-group influence. But how far, really, does Pinker believe his career trajectory was influenced by his genes, and how far by his upbringing?

“There are parallel universes to this one [in which] I wouldn’t have written The Better Angels of Our Nature or The Language Instinct,” he says. “But I’d probably be in the sciences of something human. I probably wouldn’t have been a physicist: I’m too much of a yenta, too interested in humans.” On the other hand, “I probably wouldn’t have been a literary critic.”

In this universe, Pinker studied experimental psychology at McGill University in Montréal, then did his PhD in the same field at Harvard; he has spent the rest of his career there and at MIT, just down the street.

Wherever he got them from, Pinker’s dispositions include a prodigious appetite for work. As well as being a self-confessed micromanager in his teaching work, as Harvard’s Johnstone Family Professor of Psychology, he’s usually either pursuing a full schedule of research, speaking and article-writing, or plunging into months-long marathons of book-writing.

“When I write a book, it’s almost all-consuming,” he says, recalling the year he spent in his house on Cape Cod writing The Better Angels, seven days a week, and sometimes until three in the morning. (He’d spent the previous year doing little but reading in preparation for it.) “I do try to exercise. I try to spend some time being a human being with my wife” – as recreation, he and Goldstein ride a tandem bicycle and paddle a tandem kayak. “Fortunately, she’s also a very intense writer, so she sympathises.”

The couple do not have children, a fact Pinker sometimes uses to illustrate the non-determinative nature of genetic predispositions. (He might be predisposed, thanks to natural selection, to reproduce, but he’s used his frontal lobe, a crucial part of his evolutionary inheritance, to decide not to.) “Some things have to give,” Pinker says. “I’m not on Facebook, I don’t see a whole lot of movies, I don’t watch much TV – not because I consider myself above TV, I just don’t have time. And I don’t have a whole lot of face-to-face meetings.” The Pinker–Goldstein house is sometimes almost silent, except for keyboard-tapping, for days and weeks on end.

Both partners are self-described, out-and-proud atheists. Yet while Pinker has received awards from atheist organisations for his support for their cause, it’s notable that he opts not to focus on religion, or its opponents, in his work. A Pinker book on the topic would surely have sold impressively, anointing him the fifth horseman of New Atheism – but “there’s just not enough intellectual content in there, at least on mind, for me to explore,” he says. “I think [Richard] Dawkins has done a fine job; I don’t think I have anything to add to that.”

Pinker’s relative lack of engagement in the modern wars over belief shouldn’t be taken as any endorsement of Stephen Jay Gould’s argument that religion and science are “non-overlapping magisteria”, each a legitimate domain of authority that should keep out of the other’s business. “As a matter of fact,” Pinker says, “religions have concerned themselves with the subject-matter of science… All the world’s major religions have origin myths, they have theories of psychology, of what animates a body that allows it to make decisions. And I think science has competed on that territory successfully: it has shown that those explanations are factually incorrect.”
Nor, in his eyes, should religion have any franchise on morality: “That’s not to say that morality is going to be determined by biology – it could be – but rather that it’s the subject-matter of secular moral philosophy.”

Does any kind of spirituality, however non-religiously defined, play a role in his life?

“I’m afraid of using the word ‘spiritual’,” he says. “I mean, I have a sense of awe and wonder – a sense of intellectual vertigo in pondering certain questions. I hesitate to use the word ‘spiritual’ just because it comes with so much baggage about the supernatural.”

Pinker’s next book, The Sense of Style, will be a style manual for writers incorporating insights from cognitive psychology and linguistics. For example, it will offer advice on how to get around “the curse of knowledge” – the difficulty writers face in being unable to place themselves in the mind of a reader who doesn’t already know as much as the writer knows. Or the question of how to relate to one’s imagined reader: insights from psychology, Pinker will argue, show that the appropriate metaphor to keep in mind is one of vision – that “the stance you take as a writer ought to be to pretend that you’re pointing out something in the world that your reader could see with his own eyes if only he were given an unobstructed view”.

To the extent that these, or any other findings, rely on explanations from evolutionary psychology, they’re vulnerable to a recurrent criticism: aren’t evolutionary psychologists guilty of simply constructing retrospectively satisfying ‘just-so stories’, with no way of showing whether or not they’re the truth?

In one memorable passage in How the Mind Works, Pinker suggests that our cultural tendency to reward successful executives (and Harvard academics) with high-floor offices might result from an adaptive preference for good views of the surrounding territory, the better to defend against attackers. But in an alternative world where we rewarded executives with offices in the basement, couldn’t you construct a mirror-image explanation, about the benefits of being able to hide out of sight?

For Pinker, the crucial question is whether a hypothesis can be tested. First, he says, you’d have to establish – by means of psychology experiments, or surveys of property prices – that there was indeed a culturally widespread, present-day preference for high floors with good views. Then you’d have to scour the historical evidence: for example, data from studies of “tribal warfare, on whether there’s been a historically continued preference for high vantage points over bunkers and burrows”. Sufficient data showing a preference through history and across cultures, and in contexts of life and death, might amount to good reason to accept your hypothesis.

Once more, Pinker navigates his way through my critique with ease. All attempts to puncture his unique brand of rational optimism – his confidence that careful scientific thinking, consistently applied, will carry humanity towards a future of reason, peace and flourishing – end in failure.

Even climate change, that archetypal case of humanity remaining inert in the face of scientific knowledge, doesn’t do it. “I think it would be foolhardy to say we’ll solve it, but I don’t think it’s foolhardy to say we can solve it,” Pinker says. “History tells us there have been cases in which the global community has adopted agreements to better collective welfare: the ban on atmospheric nuclear testing would be an example. The ban on commercial whaling. The end of piracy and privateering as a legitimate form of international competition. The banning of chlorofluorocarbons.”

In this domain as elsewhere, in Pinker’s judgement, science plus judicious optimism may yet win the day. Or, as he puts it: “We’re not on a trolley-track to oblivion.”