Saturday, June 07, 2014

Obituary: Alexander Shulgin: The Man Who Synthesized MDMA [Ecstasy]


The Daily Beast reran the obituary for Alexander Shulgin that first ran in The Telegraph (UK). Here is a brief overview of his life from Wikipedia:
Alexander Theodore "Sasha" Shulgin (June 17, 1925 – June 2, 2014) was an American medicinal chemist, biochemist, pharmacologist, psychopharmacologist, and author. Shulgin is credited with introducing MDMA (also known as "ecstasy") to psychologists in the late 1970s for psychopharmaceutical use. He discovered, synthesized, and personally bioassayed over 230 psychoactive compounds, and evaluated them for their psychedelic and/or entactogenic potential.

In 1991 and 1997, he and his wife Ann Shulgin authored the books PIHKAL and TIHKAL (standing for Phenethylamines and Tryptamines I Have Known And Loved), which extensively described their work and personal experiences with these two classes of psychoactive drugs. Shulgin performed seminal work into the descriptive synthesis of many of these compounds. Some of Shulgin's noteworthy discoveries include compounds of the 2C* family (such as 2C-B) and compounds of the DOx family (such as DOM). Due in part to Shulgin's extensive work in the field of psychedelic research and the rational drug design of psychedelic drugs, he has since been dubbed the "godfather of psychedelics."
If any person has ever earned the title of psychonaut, it was Shulgin.

The Week in Death: Alexander Shulgin, Who Synthesized the Drug Ecstasy

An American chemist known as the ‘Godfather of Psychedelics,’ Alexander Shulgin originally promoted the drug now known as Ecstasy as an aid to talk therapy.

The Daily Beast | 06.07.14


ALEXANDER SHULGIN, JUNE 17, 1925—JUNE 2, 2014  Alexander Shulgin, who has died aged 88, was an American chemist known as the “Godfather of Psychedelics.” In his psychopharmacological studies, Shulgin used himself as a guinea pig to analyze human reactions to more than 200 psychoactive compounds. His experiments most famously introduced the empathogenic drug MDMA into the popular consciousness—under its street name, Ecstasy.

MDMA—known chemically as 3,4-methylenedioxymethamphetamine but to Shulgin as a “low-calorie martini”—had originally been created as a blood-clotting agent in 1912. In the mid-’70s, however, Shulgin synthesised (artificially concocted) the drug and took it himself, noting its beneficial effects on human empathy and compassion. Effectively Shulgin had created a “love drug.”

“I feel absolutely clean inside, and there is nothing but pure euphoria,” wrote Shulgin in his journals. “The cleanliness, clarity, and marvelous feeling of solid inner strength continued through the next day. I am overcome by the profundity of the experience.”

Shulgin and his friend Leo Zeff, a psychologist from California, promoted MDMA across America to hundreds of psychologists and therapists as an aid to talk therapy. One of those therapists who embraced the drug was the lay Jungian psychoanalyst Ann Gotlieb, who met Shulgin in 1979. The pair bonded over their interest in mind-altering substances and married two years later.

Alexander Theodore Shulgin (often known as Sasha) was born on June 17, 1925 in Berkeley, California. Both his parents were schoolteachers in Alameda County. Shulgin studied organic chemistry at Harvard as a scholarship student but dropped out in 1943 to join the U.S. Navy, and while serving during World War II he became interested in psychopharmacology. Prior to having surgery for a thumb infection he was handed a glass of orange juice, and, assuming that the crystals at the bottom of the glass were a sedative, he drank it and fell asleep. After the surgery he discovered that the crystals were merely undissolved sugar; he had been given a placebo. He was, he said, amazed that “a fraction of a gram of sugar had rendered [him] unconscious.”

On leaving the Navy, Shulgin returned to Berkeley, where he earned a Ph.D. in biochemistry. He continued with postdoctoral work in psychiatry and pharmacology at the University of California before working in industry, first at Bio-Rad Laboratories and then as a senior research chemist at Dow Chemicals.

At Dow, he first started experimenting with mescaline. In the late ’60s he left the company to spend two years studying neurology at the University of California School of Medicine in San Francisco. He then built a lab—known as “the Farm”—behind his house and became an independent consultant.

During this period he developed ties with the Drug Enforcement Administration (DEA), giving seminars to agents on pharmacology and providing expert testimony in court. The administration granted him a licence for his analytical experiments, allowing him to synthesise illegal drugs.

Shulgin tested on himself hundreds of psychoactive chemicals, one of which was MDMA—the “emotional and sensual overtones” of which he soon extolled. He had first synthesised the drug in 1965, but took it himself only a decade later after an undergraduate from San Francisco State University described its effects.

The MDMA trials with therapists led Shulgin to Ann Gotlieb, whose father had been New Zealand’s consul to Trieste before World War II. The couple married in their back garden in a ceremony conducted by a DEA officer.

The benefits and dangers of MDMA have long been debated (it was made illegal in Britain in 1977 and in the U.S. in 1985). The debate accelerated as the drug was rebranded, and often dangerously recut, during the ’80s and ’90s to become the colourful little tablets known as Ecstasy, Molly, or simply “E.” For partygoers in raves across New York, London, and Ibiza, the drug was to become a byword for the elevations and crises inherent in clubbing.

Shulgin, however, maintained that the drug could help patients overcome trauma or debilitating guilt. He conceded that there had been “a hint of snake-oil” to its initial promotion, but insisted that it remained “an incredible tool.” He liked to quote a psychiatrist who described MDMA as “penicillin for the soul.”

Shulgin wrote hundreds of papers on his findings and several books, including the bestseller PIHKAL: A Chemical Love Story (1991), which he authored with his wife; the acronym stood for Phenethylamines I Have Known And Loved. A sequel, TIHKAL (Tryptamines I Have Known And Loved), followed in 1997. “It is our opinion that those books are pretty much cookbooks on how to make illegal drugs,” said a spokesman for San Francisco’s division of the DEA. “Agents tell me that in clandestine labs that they have raided, they have found copies.”

Alexander Shulgin is survived by his wife.

Large-Scale Structure in Networks (Santa Fe Institute)


An interesting talk from the Santa Fe Institute on how understanding large-scale networks (the speaker, Mark Newman, works primarily with social networks) can help us understand complex systems.

What the large scale structure of networks can tell us about many kinds of complex systems

June 5, 2014 | Santa Fe Institute


Networks are useful as compact mathematical representations of all sorts of systems. SFI External Professor Mark Newman asks what the large-scale mathematical structures of networks can tell us.

Mathematical measures of network properties such as degree (the number of connections a node has) and transitivity (the tendency for a node's neighbors to also be connected to one another) are simple, often-used ways of understanding network structure at a local level.
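To make those two local measures concrete, here is a small sketch in plain Python. The graph and the function names are my own invented example, not anything from Newman's talk; transitivity here is the usual "fraction of two-step paths that close into a triangle."

```python
def avg_degree(adj):
    """Average number of neighbors per node in an undirected graph."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def transitivity(adj):
    """Fraction of connected triples (paths i-v-j) that close into a triangle."""
    triples = 0
    closed = 0
    for v, nbrs in adj.items():
        nbrs = sorted(nbrs)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                triples += 1                       # the path nbrs[i] - v - nbrs[j]
                if nbrs[j] in adj[nbrs[i]]:
                    closed += 1                    # the path closes a triangle
    return closed / triples if triples else 0.0

# A toy 4-node graph: a triangle a-b-c plus a pendant node d attached to c.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

print(avg_degree(graph))     # 2.0  (degrees 2, 2, 3, 1)
print(transitivity(graph))   # 0.6  (3 of the 5 connected triples are closed)
```

These are exactly the kind of "local" statistics Newman contrasts with the large-scale, whole-network structures his talk is about.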

Newman is interested in larger-scale structures of networks with thousands or millions of nodes. He reviews statistical techniques that offer such large-scale insights, as well as potential predictive capabilities.

His presentation took place during SFI's 2014 Science Board Symposium in Santa Fe.

Physicist Michio Kaku Explains Consciousness for You


Hmmm . . . a physicist explaining consciousness. Interesting leap from one field to another on the part of one of the most successful science writers for a public audience. I have his new book, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, but I have not made time to read it.

Here is his "space-time" definition (theory) of consciousness:
consciousness is the set of feedback loops necessary to create a model of our place in space with relationship to others and relationship to time
That is nice, simple utilitarian view of consciousness, an attempt to answer the "what does it do?" question. But it fails to confront (and nothing below suggests he even addresses) the "hard problem" of consciousness, i.e., HOW does it arise, how does a 3.5 lb lump of tissue see and feel the color red? It does not address "why there is “something it is like” for a subject in conscious experience, why conscious mental states “light up” and directly appear to the subject."

His model also does not account for the existence of art or the desire to transcend waking states of consciousness. In the interview below, he addresses the "how" of humor and jokes, but not the "why" - why do we make humor and jokes, what purpose does it serve within a model of consciousness that is fundamentally about orienting us in space and time?

Perhaps I am being unfair, having not read the book, but what I see in the interview below does not inspire me to move the book up my list of things to read.

Michio Kaku Explains Consciousness for You

The gregarious physicist gets inside our brains.

By Luba Ostashevsky and Kevin Berger | June 5, 2014

The first thing we asked Michio Kaku when he stopped by Nautilus for an interview was what was a nice theoretical physicist like him doing studying the brain. Of course the outgoing Kaku, 67, a professor at City College of New York, and frequent cheerleader for science on TV and radio, had a colorful answer. He told us that one day as a child in Palo Alto, California, when the hometown of Stanford University was punctuated by apple orchards and alfalfa fields, he was struck by an obituary of Albert Einstein that mentioned the question that haunted the twilight of the great physicist’s life: how to unify the forces of nature into a “unified field theory.” Kaku, who in 2005 published a book on Einstein, and is a proponent of string theory in physics, has devoted his entire career to solving Einstein’s conundrum. But along the way, Kaku said, he has been fascinated by the other great mystery of nature: the origin of consciousness. In his new book, The Future of the Mind, Kaku has turned the physicist’s “rigorous” eye on the brain, charting its evolution, transformations, and mutations, arriving at futuristic scenarios of human brains melded with computers to amplify collective memory and intelligence. We found the book insightful and engaging and were struck by the confidence with which Kaku explains the nature of consciousness. He answered our questions with zest and insight—stirring, we might imagine, controversy among neuroscientists.


What’s a nice theoretical physicist like you doing studying the brain?

Well, first of all, in all of science, if you were to summarize two greatest mysteries, one would be the origin of the universe and one would be the origin of intelligence and consciousness. And as a physicist, I work in the first. I work in the theory of cosmology, of big bangs and multiverses. That’s my world, that’s my day job, that’s how I earn a living. However, I also realize that we physicists have been fascinated by consciousness. There are Nobel Prize winners who argue about the question of consciousness. Is there a cosmic consciousness? What does it mean to observe something? What does it mean to exist? So these are questions that we physicists have asked ourselves ever since Newton began to create laws of physics, and we began to understand that we too have to obey the laws of physics, and therefore we are part of the equation. And so there’s this huge gap that physicists have danced around for many, many decades and that is consciousness. So I decided—I said to myself, “Why not apply a physicist’s point of view to understand something as ephemeral as consciousness?” How do we physicists attack a problem? Well, first of all we create a model—a model of an electron, a proton, a planet in space. We begin to create the laws of motion for that planet and then understand how it interacts with the sun. How it goes around the sun, how it interacts with other planets. Then lastly we predict the future. We make a series of predictions for the future. So first we understand the position of the electron in space. Then we calculate the relationship of the electron to other electrons and protons. Third we run the videotape forward in time. That’s how we physicists work. 
So I said to myself, “Why not apply the same methodology to consciousness?” And then I began to realize that there are three levels of consciousness: the consciousness of space, that is, the consciousness of alligators and reptiles; the consciousness of relationship to others, that is, social animals, monkeys, animals which have a social hierarchy and emotions; and third, we run the videotape forward, we plan, strategize, scheme about the future. So I began to realize that consciousness itself falls into the same paradigm when we analyze physics and consciousness together.

What is your “space-time theory of consciousness?”

Well, I’m a physicist and we like to categorize things numerically. We like to rank things, to find the inter-relationship between things, and then to extrapolate into the future. That’s what we physicists, that’s how we approach a problem. But when it comes to consciousness, realize that there are over 20,000 papers written on consciousness. Never have so many people spent so much time to produce so little. So I wanted to create a definition of consciousness and then to rank consciousness. So I think that consciousness is the set of feedback loops necessary to create a model of our place in space with relationship to others and relationship to time. So take a look at animals for example. I would say that reptiles are conscious, but they have a limited consciousness in the sense they understand their position in space with regard to their prey, with regard to where they live, and that is basically the back of our brain. The back of our brain is the oldest part of the brain; it is the reptilian brain, the space brain. Then in the middle part of the brain is the emotional brain, the brain that understands our relationship to other members of our species. Etiquette, politeness, social hierarchy—all these things are encoded in the emotional brain, the monkey brain at the center of the brain. Then we have the highest level of consciousness, which separates us from the animal kingdom. You see animals really understand space, in fact better than us. Hawks, for example, have eyesight much better than our eyesight. We also have an emotional brain just like the monkeys and social animals, but we understand time in a way that animals don’t. We understand tomorrow. Now you can train your dog or a cat to perform many tricks, but try to explain the concept of tomorrow to your cat or a dog. Now what about hibernation? Animals hibernate, right? But that’s because it’s instinctual. It gets colder, instinct tells them to slow down and they eventually fall asleep and hibernate. 
We, on the other hand, we have to pack our bags, we have to winterize our home, we have to do all sorts of things to prepare for wintertime. And so we understand time in a way that animals don’t.

Why is a sense of time a key to understanding consciousness?

Well, we’re building robots now right? And the question is how conscious are robots? Well, as you can see, they are at a level one. They have the intelligence of a cockroach, the intelligence of an insect, the intelligence of a reptile. They don’t have emotions. They can’t laugh and they can’t understand who you are. They don’t understand who they are. There’s no understanding of a social pecking order. And, well, they understand time to a degree but only in one parameter. They can simulate the future only in one direction. We simulate the future in all dimensions—dimensions of emotions, dimensions of space and time. So we see that robots are basically at level one. And then one day, we may meet aliens from outer space and then the question is, well, if they’re smarter than us, what does that mean to be smarter than us? Well, to me, it means being able to daydream, strategize, plan much better than us. They will be several steps ahead of us if they are more intelligent than us. They could, quote, outwit us because they see the future. So that’s where we differ from the animals. We see the future. We plan, scheme, strategize. We can’t help it. And some people say, “Well bah humbug! I don’t believe this theory, there’s got to be exceptions, things that are outside the theory of consciousness like humor.” What could be more ephemeral than a joke? But think about a joke for a moment. Why is a joke funny? A joke is funny because you hear the joke, and then you mentally complete the punch line by yourself, and then when the punch line is different from what you anticipated, it is, quote, funny, okay? For example one of Roosevelt’s daughters was the gossip of the White House and she was famous for saying, “If you have nothing good to say about somebody, then please sit next to me.” Now why is that quote funny? It’s funny because you complete the sentence yourself: if you have nothing good to say about somebody, then don’t say anything at all.
Your parents taught you that. But then the twist is “well come sit next to me.” And that’s why it’s, quote, funny. Or WC Fields was asked the question, “Are you in favor of social activities for youth? Like, are you in favor of clubs for youth?” And he said, “Well am I in favor of clubs for youth? Yes, but only if kindness fails.” That’s funny because we think clubs are social gatherings, but for WC Fields he twists the punch line and says, no a club is for hitting people. And that’s why that quote is funny—because we cannot help it. We mentally complete the future.

You say we have a “CEO” in our brain. What exactly is that?

Well, how do we differ from the animals? If you put, for example, a mouse between pain and pleasure, between a shock and food, or between two pieces of food, I’m sorry, it will actually, like the proverbial donkey, get confused. It’ll go back and forth, back and forth because it cannot evaluate. It cannot do the ultimate evaluation of something. It lacks a CEO to make the final decision. We have the CEO. It’s in the frontal part of the brain and we can actually locate where our sense of awareness is. You put the brain in an MRI scan, you ask the person to imagine yourself, and bingo! Right there, right behind your forehead it lights up. That is where you have your sense of self. And then when you have to make hard decisions between two things, animals have a hard time doing that because they’re being hit with all these different kinds of stimuli. It’s a hard decision for them. We, on the other hand, again that part lights up and that is, quote, the CEO that finally makes the final decision in evaluating all the other consequences. And how did we do this? By simulating the future. If you get candy and put a candy in front of a kid the kid says, “Well if I grab that candy will my mother be happy? Will my mother be sad? I mean, how will I pay for it?” That’s what goes on in your mind, you complete the future and that’s the part of the brain that lights up. So that’s how the CEO makes the decision between two things while animals do it by instinct, or they just get confused.

Your “CEO in the brain” seems to act with intent and purpose. But neurons just fire or don’t. You can’t say they have purpose, right?

There is a purpose behind our consciousness, and that is basically survival and also reproduction. So if you think about your daydreaming, what do you daydream about? Well you daydream about survival first of all. Where’s my next food or my job? I mean, how do I impress people to advance in my career? And so on and so forth. And then you think, “Hey it’s Friday night. You know, I’m lonely. I want to go out and, you know, dance at some dance hall and have some fun.” So if you think about it, there is a purpose, and that’s why we have emotions. Emotions have a definite purpose. Evolution gave us emotions because they’re good for us. For example, the concept of like. How do you like something? Well if you think about it, most things are actually dangerous. Of all the things that you see around, they’re either neutral or actually dangerous. There’s only a small sliver of things which are good for you. And emotions say, “I like this because these things are good for you.” Jealousy is very important, for example, as an emotion because it helps to ensure your reproduction and the fact that your genes will carry out into the next generation. Anger. All these emotions that we have, that are instinctual, are basically hardwired into us because we have to make split-second decisions, which would take many, many minutes for the prefrontal cortex to rationally evaluate. We don’t have time for that. If you see a tiger, you feel fear. That’s because it’s dangerous and you have to run away. And then we have the other question that is sometimes asked: Can a robot feel redness? Or how do we know that we are conscious? Because we can feel a sunset or we feel the enormous splendor of nature but robots can’t, right? Well I don’t believe in that, because back in the old days people used to ask the question, “What is life?” I still remember, as a kid, all these essays and articles written about “What is life?” That question has pretty much disappeared. 
Nobody asks that question anymore because we now know—because of biotechnology, the gradations—it’s a very complicated question. It’s not just living and non-living. We have all sorts of viruses and all sorts of things in between. So we now realize that the question “What is life?” has pretty much disappeared. So I think the question of “What is consciousness and can consciousness understand redness in a machine?” will also gradually just disappear. One day we will have a machine that understands redness much better than us. It’ll be able to understand the electromagnetic spectrum, the poetry, be able to analyze the law of redness, history of redness, much better than any human. And the robot will say, “Can humans understand redness? I don’t think so.” One day, robots will have so much access to the Internet—so much access to sensors—that they will understand redness in a way that most humans cannot and robots will conclude that, “My god, humans cannot understand redness.”

Granted, consciousness arises out of the brain. But what is consciousness itself? What, for instance, is the sense of redness?

Well, if you take a look at the circuitry of the brain, you realize that the sensors of the brain are limited. Sometimes they can be mis-wired; that’s called synesthesia. And you realize that we have certain parts of the brain that register certain kinds of senses, including redness. Now then the question is, can blind people understand redness, right? And the answer is no, but they have the receptors—they have the apparatus there that can allow them to understand redness, but they don’t. So ultimately, I think you can create a robot—a robot, which will have the same sensors, the same abilities—to understand redness much better than a human and be able to recite poetry, be able to have eloquent statements about the essence of redness much better than any human poet can. Then the question is, well, does the robot understand redness? At that point the question becomes irrelevant, because the robot can talk, feel, express the concept of redness many, many, many times better than any human. But what’s going on in the mind? Well, a bunch of circuits or a bunch of neurons firing and so on and so forth. And that will be redness.

What is self-awareness?

Well, again there are thousands of papers written about self-awareness and I have to make a definition in one sentence. My definition is very simple: Self-awareness is when you put yourself in that model. So this model of space, your relationship to other humans, and then relationship to time, when you put yourself in that model that is self-awareness. And then you ask the question, well, are robots self-aware? Well the answer is obviously no. When the robot Watson beat two humans on the game show Jeopardy on national TV, many people thought, “Uh-oh the robots are coming; they’re going to put us in zoos. They’re going to throw peanuts at us and make us dance behind bars just like we do that with bears.” Wrong. Watson has no sense of self-awareness. You cannot go to Watson and slap it on his back and say, “Good boy, good boy, you just beat two humans on Jeopardy.” Look, Watson doesn’t even know that it is a computer. Watson doesn’t know what a human is. Watson doesn’t even know that it won this prize of beating two humans on a game show because he does not have a model of itself as a machine, a model of humans as being made out of flesh and blood, and he doesn’t have the three categories of intelligence other than understanding space and being able to navigate facts on the internet. So again, self-awareness I have to define it. Self-awareness is when you put yourself in this model of space, time, and relationship to others.

Are we merely biological machines?

Well, we are definitely biological machines. Okay, there’s no question about that. The question is, what does that mean? What does that mean for people’s feelings about the universe [and] sense of who they are? Are humans special in that sense from animals? Well, I’ve looked at a continuum. If you take a look at our own evolution and you were to believe, for example, that only humans are conscious (which is the dominant position of most psychologists and most people in the field), that humans really are different, we are conscious, animals are not. That is the dominant position in the entire field. But if you take our evolutionary history, at what point did we suddenly become conscious? There’s a continuum of our ancestors going back millions, in fact, billions of years and then you say, “Well at what point did we suddenly become conscious?” and then you begin to realize that, hey this is a stupid question. Consciousness itself probably has a continuum. It has stages as I mentioned, but consciousness probably has a continuum and so, in that sense, we are linked to the animal kingdom. Now are we special? Again, it depends on how you define special—how you define soul. What I’m saying is, if you give me a criterion, that is, are we x, y, z? Then what I say is, “Okay how do you measure it?” Give me an experiment that I can put a human in a box by which I can measure this criterion that you give me. So are we biological machines? The answer is yes, but what does that mean? Does a machine have a soul? Does a machine have something more? Well, define more. Define soul. Define essence. Give me a definition and then I will give you an experiment by which we can differentiate yes or no. That’s how we physicists think.

What’s the future of the human brain?

Well, first of all I think that brain-machine interface is going to explode in terms of developments, financing, and breakthroughs. The Pentagon is putting tens of millions of dollars into the revolutionary prosthetics program because think of the thousands of veterans of Iraq and Afghanistan who had injured spinal cords, no arms, no legs. We can connect the brain directly now to a mechanical arm [or] mechanical leg. At the next international soccer games, the person who starts the Brazilian World Cup Games will be partially paralyzed, wearing an exoskeleton. In fact, my colleague Stephen Hawking, the noted cosmologist, he has lost control of his fingers now, so we have connected his brain to a computer. The next time you see him on television, look at his right frame. In his right frame there’s an antenna there with a chip in it that connects him to a laptop computer. And we now have, in this sense, telepathy. We’re now able to actually take human thoughts and carry out movements of objects in the material world. People who are totally paralyzed can now read email, write email, surf the web, do crossword puzzles, operate their wheelchair, operate household appliances and they are totally paralyzed—they are vegetables. We’ve done this with animals. We’ve done this with humans. And in the future, because you ask about the future, we will also have artificial memories as well. Last year for the first time in world history, we recorded a memory and implanted a memory into the brain. At Wake Forest University and also in Los Angeles, you take a mouse, teach the mouse how to sip water from a flask, and then look at the hippocampus, record the impulses ricocheting across the hippocampus (which is the gateway to memory), record it, and then later, when the mouse forgets how to do this, you re-insert the memory back into the hippocampus and bingo! The mouse learns on the first try. Next, will be primates.
For example, a primate eating a banana or learning how to manipulate a certain kind of toy. That memory can be recorded and then re-inserted into the brain. And the short-term goal is to create a brain pacemaker. A brain pacemaker whereby people with Alzheimer’s could just simply push a button and they will know where they live, they will know who they are, they will know who their kids are, and beyond that, even more complex memories. Maybe we’ll be able to record a memory of a vacation you never had and be able to upload that vacation. Or you’re a college student learning calculus by simply pushing a button. Or if you’re a worker that’s been laid off because of technology, why not upgrade your skills? These are all possibilities that are real because now the politicians are getting interested in this, and they’re putting big bucks to the tune of a billion dollars into the brain initiative.

How will artificial intelligence change our view of humanity?

Well, we realize that democracy is perhaps the worst form of government except for all the others that have been tried, said Winston Churchill, and people will democratically vote. They will democratically decide how the human race will evolve. For example: designer children. We cannot do that today, but it’s coming. The day will come when parents will decide what genes they want to have propagated into their kids. Already, for example, if you’re Jewish in Brooklyn and you have the potential of Tay-Sachs, a horrible genetic disease, you can be tested and the embryos can be tested and you can abort them. So you have already a form of genetic engineering taking place right now, today. We can actually genetically engineer certain disease genes out of your gene pool. That’s today. In the future we may be able to deliberately do this. And so we begin to realize that we may have the power of controlling our genetic destiny. And the same thing with intelligence: If we have the ability to upload memories, perhaps we’ll have the ability to have super memories—to have a library of memories so that we can learn calculus and learn all the different subjects that we flunked in college—and have them inserted into our mind. And so as the decades go by, we may have these superhuman abilities. And with exoskeletons we may be able to live on Mars and live on other planets with skeletons that allow us to have superpowers and the ability to breathe in different atmospheres and things like this. My point is that in a democracy people will decide for themselves. We cannot decide. We cannot say that that’s immoral, that’s moral. People in the future will democratically decide how they want their genetic heritage and how they want their physical heritage to be propagated.

Friday, June 06, 2014

Mirror Neurons Are Essential, But Not in the Way You Think (Nautilus)

Mirror neurons, as Christian Jarrett has twice asserted, are likely the most over-hyped concept in neuroscience. In the paragraph below, the solution to the mystery is stated, but it is not named.
Despite her apt framing of the adaptation hypothesis, [Cecilia] Heyes actually argues against it. If she is right, then we’re all simply born with one set of visual neurons that becomes activated when observing an action, and a second set of motor neurons that activates when executing an action. Sometimes the activity of those neurons becomes correlated—perhaps because the two events occurred closely in time, or because one regularly precedes the other—and those motor neurons become mirror neurons as a result.
The activity of visual neurons and motor neurons is correlated because such correlation is the basic foundation of learning, and the neurons appear to be synchronized because the brain is fully capable of parallel processing. This brief summary from Sigman & Dehaene (The Journal of Neuroscience, 2008; 28(30):7585–7598) should suffice:
According to a prominent theory, which emerged from numerous behavioral experiments, perceptual and response operations occur in parallel, and only a central decision stage, involved in coordinating sensory and motor operations, is delayed (Pashler, 1994).
There you go - mystery solved. For a more in-depth explanation, I refer interested readers to "Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans," by Gregory Hickok (J Cogn Neurosci. Jul 2009; 21(7): 1229–1243).
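The central-bottleneck idea quoted above can be illustrated with a toy timing model: perceptual and motor stages of two tasks run in parallel, but the central decision stage handles one task at a time. This is only a sketch of the general model; the stage durations and function names are illustrative assumptions, not values from Pashler or Sigman & Dehaene.

```python
# Toy central-bottleneck model: perceptual (P) and motor (M) stages of two
# tasks can overlap freely, but the central (C) stage is serial, so task 2's
# central stage must wait for task 1's to finish. Durations (ms) are made up.

def reaction_times(soa, p=100, c=150, m=80):
    # Task 1 starts at t = 0.
    t1_central_end = p + c
    rt1 = p + c + m
    # Task 2 starts at t = soa; its central stage queues behind task 1's.
    t2_perceptual_end = soa + p
    t2_central_start = max(t2_perceptual_end, t1_central_end)
    rt2 = (t2_central_start + c + m) - soa
    return rt1, rt2

# Short SOA: task 2 is slowed by the bottleneck; long SOA: no slowing.
print(reaction_times(0))    # -> (330, 480)
print(reaction_times(500))  # -> (330, 330)
```

The point of the sketch is that only the central stage is delayed, which is exactly the parallel-with-serial-bottleneck architecture the quote describes.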

Anyway, this is an entertaining article from Nautilus on how mirror neuron research is refuting many of the initial claims.

Mirror Neurons Are Essential, but Not in the Way You Think


Posted By Jason G. Goldman on Jun 04, 2014



A “brainbow”: neurons labeled with fluorescent tags, in this case from a mouse. Stephen J. Smith via Wikipedia

In his 2011 book, The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human, neuroscientist V. S. Ramachandran says that some of the cells in your brain are of a special variety. He calls them the “neurons that built civilization,” but you might know them as mirror neurons. They’ve been implicated in just about everything from the development of empathy in earlier primates, millions of years ago, to the emergence of complex culture in our species.

Ramachandran says that mirror neurons help explain the things that make us so apparently unique: tool use, cooking with fire, using complex linguistics to communicate.

It’s an inherently seductive idea: that one small tweak to a particular set of brain cells could have transformed an early primate into something that was somehow more. Indeed, experimental psychologist Cecilia Heyes wrote in 2010 (pdf), “[mirror neurons] intrigue both specialists and non-specialists, celebrated as a ‘revolution’ in understanding social behaviour and ‘the driving force’ behind ‘the great leap forward’ in human evolution.”

The story of mirror neurons begins in the 1990s at the University of Parma in Italy. A group of neuroscientists were studying rhesus monkeys by implanting small electrodes in their brains, and they found that some cells exhibited a curious kind of behavior. They fired both when the monkey executed a movement, such as grasping a banana, and also when the monkey watched the experimenter execute that very same movement.

It was immediately an exciting find. These neurons were located in a part of the brain thought to be solely responsible for sending motor commands out from the brain, through the brainstem to the spinal cord, and out to the nerves that control the body’s muscles. This finding suggested that these cells are not just used for executing actions but are somehow involved in understanding the observed actions of others.

After that came a flood of research connecting mirror neurons to the development of empathy, autism, language, tool use, fire, and more. Psychologist and science writer Christian Jarrett has twice referred to mirror neurons as “the most hyped concept in neuroscience.” Is he right? Where does empirical evidence end and overheated speculation begin?



Neurons in the dentate gyrus in the brain of a person with epilepsy MethoxyRoxy via Wikipedia

The strongest claim about mirror neurons is that they are responsible for human uniqueness, because they turned us into a singularly social primate. “It is widely believed that hyper-sociality is what makes humans ‘special,’ the key to understanding why it is we, and not the members of any other species, who dominate the world with our language, artefacts and institutions,” wrote Heyes in her 2010 commentary. “Therefore, in the light of this ‘adaptation hypothesis,’ mirror neurons emerge as an evolutionary foundation of human uniqueness…If mirror neurons are an adaptation, and more ‘advanced’ in humans than in monkeys, they may well play a major role in explaining the evolutionary origins and online control of human social cognition,” she wrote. Indeed, that’s the same claim that Ramachandran makes.

But recent research casts doubt on the adaptation hypothesis. Increasing evidence indicates that the so-called “mirror effect” in brain cells can be enhanced, abolished, or even reversed due to the effects of learning. The mirror-neuron systems of dancers and musicians, for example, have different properties than those of others, and those tool-sensitive mirror neurons in monkeys only come about as a result of experience with tools.

Despite her apt framing of the adaptation hypothesis, Heyes actually argues against it. If she is right, then we’re all simply born with one set of visual neurons that becomes activated when observing an action, and a second set of motor neurons that activates when executing an action. Sometimes the activity of those neurons becomes correlated—perhaps because the two events occurred closely in time, or because one regularly precedes the other—and those motor neurons become mirror neurons as a result.

That means that mirror neurons didn’t evolve, per se. What evolved is the mechanism that produces mirror neurons: associative learning, our ability to identify statistical patterns in the world, to associate one event with another, like the ringing of a bell with a tasty dinner. And associative learning is present in a wide variety of species, meaning that its mere presence can’t be the “evolutionary foundation of human uniqueness.”
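The associative account described above amounts to simple Hebbian learning: when seeing and doing repeatedly co-occur, the visual pathway onto a motor unit is strengthened until seeing alone can drive it. Here is a minimal sketch of that idea; the function, parameters, and numbers are all illustrative assumptions, not anything from Heyes’s paper.

```python
# Toy Hebbian sketch of the associative account: a motor unit initially
# fires only from its own "execute" input. Repeated pairing of observation
# and execution strengthens the visual-to-motor weight until visual input
# alone exceeds the firing threshold -- the unit has become "mirror-like".

def hebbian_pairing(trials=50, rate=0.1, threshold=0.5):
    w_visual = 0.0   # visual input -> motor unit (starts ineffective)
    w_execute = 1.0  # motor command -> motor unit (innately effective)
    for _ in range(trials):
        visual, execute = 1.0, 1.0           # see and do at the same time
        motor_out = visual * w_visual + execute * w_execute
        if motor_out >= threshold:           # Hebb: co-activity strengthens
            w_visual += rate * visual * (1.0 - w_visual)
    return w_visual

w = hebbian_pairing()
# After training, visual input alone clears the threshold.
print(w > 0.5)  # -> True
```

Nothing in the sketch is specific to mirroring: the same update rule would associate any two reliably co-occurring events, which is why, on this account, associative learning rather than a dedicated mirror-neuron adaptation is what evolved.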

If mirror neurons are formed routinely by learning, that may explain the fact that since their initial discovery, researchers have spotted a wide variety of mirror neurons, in different parts of the brain. Some are highly tuned to one particular kind of action, firing for only precision grips (grasping an object between two fingers) rather than whole-hand grips. Others reliably fire for both kinds of grasping actions, but not for other sorts of behaviors. Some mirror neurons are willing to fire when an object is being grasped with a tool, like a pair of pliers, rather than with a hand. Others demand a biological hand in order to activate.

One study found that some mirror neurons still fire even when the action is hidden behind an occluder, revealing that they work when the monkey must imagine the action in its mind’s eye, while others do not. Some studies have found that some mirror neurons will fire if a certain action is viewed from the front, side, or back of the performer, while others fire only when an action is witnessed from a particular viewpoint.

Some mirror neurons are perfectly happy to fire when the performer is on a video screen, but others require a live, in-the-flesh performer. There are also, according to some reports, so-called “auditory mirror neurons,” discharging both when the monkey hears itself do an action, and when the monkey hears someone else do the same action.

Perhaps most interestingly, some mirror neurons appear to “care” about the goal or reward value of the performer’s actions, rather than the action itself. For example, in one experiment, monkeys viewed an actor reaching for and grasping an object, and then either eating it or placing it into a container. Some mirror neurons fired for the eating action, others only for the placing action, and others fired in both cases.

Since mirror neurons seem to be involved with so much that our brains do, it is less likely that they suddenly appeared and made one momentous change to one of our ancestors. Instead, it seems like they’re a widespread part of how neurons become organized as we learn.

Even if they’re the result of experience, even if they’re a byproduct of the evolution of associative learning rather than adaptations themselves, mirror neurons could still play an important role in complex social phenomena. Heyes compares mirror neurons to the neural systems used in reading. “The neural mechanisms involved in reading did not evolve for that ‘purpose,’ but through explicit training they are made to fulfill an important function,” she says. Mirror-neuron systems can be “recruited” in the development of social-cognitive skills like empathy, Heyes suggests, just as visual pattern-recognition systems are recruited in the development of reading skills. But mirror neurons are not “for” social cognition any more than pattern-recognition neurons are “for” reading, which has only been practiced in a very recent slice of human existence.

In reflecting on mirror neurons, it is perhaps easy to see how the reasonable, evidence-based claims about some brain cells made in the mid-1990s snowballed into the media-driven frenzy that dominates the discourse around mirror neurons now, some 20 years later. It’s a compelling idea: that a tiny group of brain cells could have a hand in all that we think of as unique to our species. Mirror neurons are indeed amazing and fascinating and worthy of awe themselves. But the difficult problem of how to understand our own species will not be solved by as simple an answer as the mirror neuron. Perhaps it’s time to smash the mirror neuron’s hype and steel ourselves against the inevitable seven years of bad luck.


~ Jason G. Goldman received his Ph.D. in developmental psychology at the University of Southern California in Los Angeles and writes a blog called The Thoughtful Animal, hosted at Scientific American. His doctoral research focused on the evolution and architecture of the mind, and how different early experiences might affect innate knowledge systems.

Jaak Panksepp - Affective Continuity? From SEEKING to PLAY -- Science, Therapeutics and Beyond

 

This is a cool two-part talk from the putative founder of affective neuroscience, Jaak Panksepp. His work has been essentially absorbed into the field of interpersonal neurobiology, and his recent book, The Archaeology of Mind: The Neuroevolutionary Origins of Human Emotion (2012), was released as part of that series at WW Norton (this is a revised and updated [less sciency] version of Panksepp's seminal 1998 text, Affective Neuroscience: The Foundations of Human and Animal Emotions).


Affective Continuity? From SEEKING to PLAY -- Science, Therapeutics and Beyond

Published on Nov 16, 2012

The reward SEEKING system of the brain is a general-purpose emotional process that all mammals use to acquire all the resources needed for survival, from daily meals to social bonds—a "go-and-find-and-get what you need and want" system for all rewards. It provides a solid foundation of eager organismic coherence for all the other primary-process emotional functions, including positive ones such as LUST, CARE, and PLAY as well as negative ones such as RAGE, FEAR, and PANIC/GRIEF.
This summary will focus on the hierarchical arrangement of the affective BrainMind, which provides solid affective foundations for learning and higher mental processes, which can then help regulate emotions via developmental progressions in which bottom-up maturational processes grounded in affective feelings give way at maturity to various top-down regulations of behavior and feelings. This kind of two-way circular causality provides important considerations not only for envisioning the maturation of the MindBrain but also for potentially new Affective Balance Therapies that deploy our increasing appreciation of the importance of social joy and emotional homeostasis in mental health and disorder. In this vision, the positive forces of SEEKING, especially in the form of CARE and PLAY, can be used to counteract the depressive despair that arises from fragile and broken social bonds, key sources of affective insecurity. Direct manipulations of the SEEKING system and the closely associated PLAY system may alleviate depressive despair. Clearer images of the evolved infrastructure of the affective mind provide (i) controversial new avenues for therapeutic mental-health interventions, (ii) more naturalistic visions of child-rearing practices, and (iii) new visions of the cognitive facets of human minds and cultures. Some of these issues are further elaborated in The Archaeology of Mind: The Neuroevolutionary Origins of Human Emotion (2012).

Part One



Part Two


Thursday, June 05, 2014

Who's Afraid of Robots? [UPDATED]

 

Once again, from Bookforum's Omnivore blog, here is a cool collection of links on all things robotic, from killer robots to ethical robots (U.S. military!). Oh, and it seems you would probably f**k a robot, according to Gawker. I don't know, it would at least have to buy me dinner and get me drunk....

UPDATE: This morning Aeon Magazine posted an interesting and highly related article on its site, "Sexbot slaves: Thanks to new technology, sex toys are becoming tools for connection - but will sexbots reverse that trend?" by Leah Reich. Here is a little of the article:
‘Right now, we’re at an inflection point on the meaning of sexbot,’ says Kyle Machulis, the California-based world expert on sex technology. ‘Tracing the history of the term will lead you to a fork: robots for sex (idealised version: Jude Law in the movie AI), and people that fetishise being robots (clockworks, etc). There was a crossover of these in the days of alt.sex.fetish.robots, but I see less and less people fetishising the media/aesthetics, and more talking about actually having sex with robots.’
Strange times we live in, eh?

Who's afraid of robots?

Jun 4 2014
9:00AM

The Social Brain Meets the Reactive Genome: Neuroscience, Epigenetics and the New Social Biology


This interesting new research article from Frontiers in Human Neuroscience looks at the convergence of neuroscience, epigenetics, and sociobiology. This is certainly a big piece of the future of understanding the brain: understanding which genes get turned on or off by trauma, diet, environment, and so on, and how all of this relates to human beings in relationship with each other.

Cool stuff, in my opinion, but also pretty geeky, so be warned.

Full Citation:
Meloni, M. (2014, May 21). The social brain meets the reactive genome: neuroscience, epigenetics and the new social biology. Frontiers in Human Neuroscience; 8:309. doi: 10.3389/fnhum.2014.00309

The social brain meets the reactive genome: Neuroscience, epigenetics and the new social biology

Maurizio Meloni
  • School of Sociology and Social Policy, Institute for Science and Society, University of Nottingham, Nottingham, UK

Abstract


The rise of molecular epigenetics over the last few years promises to bring the discourse about the sociality and susceptibility to environmental influences of the brain to an entirely new level. Epigenetics deals with molecular mechanisms such as gene expression, which may embed in the organism “memories” of social experiences and environmental exposures. These changes in gene expression may be transmitted across generations without changes in the DNA sequence. Epigenetics is the most advanced example of the new postgenomic and context-dependent view of the gene that is making its way into contemporary biology. In my article I will use the current emergence of epigenetics and its link with neuroscience research as an example of the new, and in a way unprecedented, sociality of contemporary biology. After a review of the most important developments of epigenetic research, and some of its links with neuroscience, in the second part I reflect on the novel challenges that epigenetics presents for the social sciences for a re-conceptualization of the link between the biological and the social in a postgenomic age. Although epigenetics remains a contested, hyped, and often uncritical terrain, I claim that especially when conceptualized in broader non-genecentric frameworks, it has a genuine potential to reformulate the ossified biology/society debate.


After Gene-Centrism: the New Social Biology


Profound conceptual novelties have interested the life-sciences in the last three decades. In several disciplines, from neuroscience to genetics, we have witnessed a growing (and parallel) crisis of models that tended to sever biological factors from social/environmental ones. This possibility of disentangling neatly what seemed to belong to the “biological” from the “environmental” and to attribute a sort of causal primacy to biological factors (equated with genetic) in opposition to social or cultural ones (thought of as being more superficial, or appearing later in the ontology of development) was part and parcel of very vocal research-programs in the 1990s. These programs were all more or less heirs of the gene-centrism of sociobiology: from evolutionary psychology, to a powerful nativism that was very influential in psychology and cognitive neuroscience with its obsessive emphasis on hardwiring culture or morality into the brain.

These programs have always received a barrage of criticisms from several intellectual traditions (Griffiths, 2009; Meloni, 2013a), particularly those with roots in ethology (Lehrman, 1953, 1970; Bateson, 1991; Bateson and Martin, 1999), and developmental biology (West and King, 1987; Griffiths and Gray, 1994; Gottlieb, 1997; Oyama, 2000a[1985],b; Oyama et al., 2001; Griffiths, 2002; Moore, 2003). However, never as in this last decade, we have had scientific evidence that the dichotomous view of biology vs. society and biology vs. culture is biologically fallacious (Meaney, 2001a).

Paradoxically, it was exactly the completion of the Human Genome Project that showed that the view of the gene as a discrete and autonomous agent powerfully leading traits and developmental processes is more of a fantasy than actually being founded on scientific evidence, as highlighted by the “missing heritability” case (Maher, 2008). The image of a distinct, particulate gene marked by “clearly defined boundaries” and performing just one job, i.e., coding for proteins, has been overturned in recent years (Griffiths and Stotz, 2013: 68; see also Barnes and Dupré, 2008; Keller, 2011). Although discussions are far from being settled, the work of the ENCODE consortium for instance has been crucial in showing the important regulatory functions of what, in a narrow “gene-centric view”, was supposed to be mere “junk DNA” (Encode, 2007, 2012; Pennisi, 2012). Not only does a very small percentage of the genome (less than 2%) act according to the classical definition of the gene as a protein-coding sequence, but most of the non-protein coding DNA in fact plays an important regulatory function. The genome is therefore today best described as a “vast reactive system” (Keller, 2011) embedded in a complex regulatory network with distributed specificity (Griffiths and Stotz, 2013). An important part of this regulatory network is involved in responding to environmental signals, which can cover a very broad range of phenomena, from the cellular environment around the DNA, to the entire organism and, in the case of human beings, their social and cultural dynamics.

To sum up a decade of empirical and conceptual novelties the conceptualization of the gene has become dynamic and “perspectival” (Moss, 2003), in what can be called the new “postgenomic view1”; it addresses genes as part of a broader regulative context, “embedded inside cells and their complex chemical environments” that are, in turn, embedded in organs, systems and societies (Lewkowicz, 2010). Genes are now seen as “catalysts” more than “codes” in development (Elman et al., 1996), “followers” rather than “leaders” in evolution (West-Eberhard, 2003; Robert, 2004). The more genetic research has gone forward, the more genomes are seen to “respond in a flexible manner to signals from a massive regulatory architecture that is, increasingly, the real focus of research in ‘genetics’” (Griffiths and Stotz, 2013: 2; see also Barnes and Dupré, 2008; Dupré, 2012).

As Michael Meaney (2001a: 52, 58) wrote more than a decade ago: “There are no genetic factors that can be studied independently of the environment, and there are no environmental factors that function independently of the genome… . At no point in life is the operation of the genome independent of the context in which it functions.” Moreover, “environmental events occurring at a later stage of development … can alter a developmental trajectory” making meaningless any linear regression studies of nature and nurture. Genes are always “genes in context”, “context-dependent catalysts of cellular changes, rather “controllers” of developmental progress and direction” (Nijhout, 1990: 444), susceptible to be reversed in their expression by individual’s experiences during development (Champagne and Mashoodh, 2009).


Epigenetics


The recent surge of interest in molecular epigenetics is probably the most visible example of these conceptual changes in contemporary biology. After a delay of almost fifty years from its coining, epigenetics has become a “buzzword” in XXI century biology (Jablonka and Raz, 2009: 131): the vertical growth of publications in the field in the last decade certifies this epidemic of epigenetics (Haig, 2012; Jirtle, 2012). It is far from my intention to oversell the conceptual and evidential strength of a discipline still as embryonic, multiple, and contested as molecular epigenetics. Many things in epigenetics remain highly controversial and debated, and cautiousness in dealing with its relevance, especially for humans, remains a good scientific policy (Feil and Fraga, 2012). Moreover, the notion of epigenetics is elusive and plastic, meaning different things for different research contexts (Morange, 2002; Bird, 2007; Ptashne, 2007; Dupré, 2012; Griffiths and Stotz, 2013). Despite (or, more likely, just because of) this semantic ambiguity epigenetics prospers as a scientific and social phenomenon in need of careful reflective scrutiny (Meloni and Testa, in press).

Also, the genealogy of epigenetics in biological thought is complex, and its current molecular “crystallization” is the result of a series of important conceptual shifts (Jablonka and Lamb, 2002; Haig, 2012; Griffiths and Stotz, 2013). The notion was firstly coined by embryologist and developmental biologist C. H. Waddington (1905–1975) in the 1940s as a neologism from epigenesis to define, in a broader non-molecular sense, the “whole complex of developmental processes” that connects genotype and phenotype (reprinted in Waddington, 2012). For Waddington epigenetics was “the branch of biology which studies the causal interactions between genes and their products which bring the phenotype into being” (Waddington, 1968 see Jablonka and Lamb, 2002).

A parallel origin of the concept is having probably a stronger influence on the present understanding of epigenetics. This latter tradition originates with Nanney’s (1958) paper in Epigenetic Control Systems, and refers more specifically to the existence of a second non-genetic system, at the cellular level, that regulates gene expression (Nanney, 1958; see, Haig, 2004; Griffiths and Stotz, 2013).

It is this second narrower molecular meaning that is becoming increasingly influential in the contemporary literature (Griffiths and Stotz, 2013). This is why it is probably more correct to call contemporary epigenetics “molecular epigenetics” to differentiate it from the broader Waddingtonian sense and the developmentalist-embryological tradition in which the term was firstly conceived, although it is true that the two meanings are not in principle irreconcilable as they both emphasize the context (molecular or at the level of the organism) where genetic functioning takes place (Hallgrímsson and Hall, 2011).

In the present mainstream molecular sense, a rather standard and very often quoted definition of “epigenetics” is “the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence” (my italics, Russo et al., 1996, quoted in Bird, 2007: 396; see also Feng and Fan, 2009). This definition in a negative form is pretty typical even in less technical books, where we find epigenetics called as the study of all the “long-term alterations of DNA that don’t involve changes in the DNA sequence itself” (Francis, 2011: X, my italics).

In a broader but still negative form, epigenetics can be defined as any “phenotypic variation that is not attributable to genetic variation” (Haig, 2012: 15, my italics). If we search for an operationally positive definition (more rare), we can call molecular epigenetics “the active perpetuation of local chromatin states” (Bird and Macleod, 2004 quoted in Richards, 2006: 395) or the self-perpetuation of gene expression “in the absence of the original signal that caused them” (Dulac, 2010: 729). The preferred recourse to a negative definition not only reflects the uncertainty surrounding the range and stability of epigenetic mutations, but more importantly it makes evident the difficulties of conceptualizing epigenetics in a way that might finally go beyond a gene-centric view of heredity and phenotypic development2.

DNA methylation, the addition of a methyl group to a DNA base that can silence gene expression, is the most well-known example of an epigenetic modification. Given its crucial function as regulator of gene expression, methylation has been defined as the “prima donna” of epigenetics (Santos, quoted in Sweatt, 2013). Other possible examples of epigenetic marks include histone modifications, alterations of chromatin structure, and gene regulation by non-coding RNA.

In evolutionary terms, epigenetic changes, far from being a biological anomaly, are fundamental for developmental plasticity, the “intermediate process” by which a “fixed genome” can respond in a dynamic way to the solicitations from a changing environment, and produce different phenotypes from a single genome (Meaney and Szyf, 2005; cfr. also Robert, 2004; Gluckman et al., 2009, 2011). Recent studies (Kucharski et al., 2008; Lyko et al., 2010) on the impact of DNA methylation on the development of different phenotypes between sterile worker and fertile queen honeybees (Apis mellifera) have shown the importance of epigenetic changes (via different nutrition in this case) on the mechanism underlying developmental plasticity.

Even more interestingly, these changes in gene expression (and the phenotypic alteration that results from it) have a twofold property whose importance in rethinking the nexus of biology and social factors cannot be underestimated: (1) some epigenetic modifications, like DNA methylation, can be maintained throughout life whereas others are susceptible to change even later in life being therefore reversible under certain circumstances; and (2) some epigenetic states, against established wisdom, appear to be transmissible inter-generationally.

Point 2 especially remains very controversial because received wisdom is that these epigenetic marks are reset at each generation and therefore incapable of offering the required stability to sustain transgenerational phenotypic changes. It is true that the issue of transgenerational epigenetic inheritance remains the source of more questions than answers so far (Daxinger and Whitelaw, 2010), but novel and interesting studies are challenging the established view of inheritance (Anway et al., 2005; Rassoulzadegan et al., 2006; Hitchins, 2007; Wagner et al., 2008; Franklin et al., 2010; Saavedra-Rodriguez and Feig, 2013) and pointing at the transgenerational effects on future generations (up to four) of environmental effects via epigenetic mechanisms in the two alternative forms of: (a) germline epigenetic inheritance (where the epigenetic mark is directly transmitted, see for instance Anway et al. (2005); and (b) experience-dependent non-germline epigenetic inheritance (where the epigenetic mark is recreated in each successive generation by the re-occurrence of the inducing behavior, or “niche recreation”: Champagne, 2008, 2013a, b; Champagne and Curley, 2008; Danchin et al., 2011; Gluckman et al., 2011).

Possible examples of these latter indirect or non germline epigenetic phenomena in humans include the often quoted research on transgenerational effects on chronic disease in individuals prenatally exposed to famine during the Dutch Hunger Winter in 1944–45 (Heijmans et al., 2008; Painter et al., 2008; Veenendaal et al., 2013). In the context of the growing interest in the developmental origins of chronic noncommunicable disease in humans (the so-called “developmental origins of health and disease”, DOHaD), epigenetic research is bringing to light how, during particularly plastic phases of development, environmental cues (for instance, in the above quoted example, levels of nutrition) set up stable epigenetic markers that shape (or “program”) the organism’s later susceptibility to disease (Gluckman et al., 2011).

In a broader evolutionary perspective, epigenetic marks, and DNA methylation in particular, are becoming recognized as “candidate mechanisms” (Kappeler and Meaney, 2010; see also Danchin et al., 2011) for parental effects, the phenomenon whereby exposures in one generation to certain environmental states (for instance in this case, famine) can affect the next generation’s phenotypes without affecting their genotypes (Badyaev and Uller, 2009; Danchin et al., 2011).

Consequences for Heredity

It appears evident even from this limited survey that the consequences of epigenetics for the notion of biological inheritance are profound. By challenging the idea that heredity is the mere transmission of nuclear DNA, epigenetics has opened the doors to a broader, extended view of heredity by which information is transferred from one generation to the next by many interacting inheritance systems (Jablonka and Lamb, 2005). Epigenetic variations act as a parallel inheritance system through which the organism can respond in a more flexible and rapid way to environmental cues and transmit to different cell lineages different “interpretations” of DNA information (ibid.).

It is no longer the mere DNA sequence that is transferred inter-generationally, but, expanding on the notions of “ontogenetic niche” coined in the 1980s (West and King, 1987), it is the whole “developmental niche” (Stotz, 2008), “the set of environmental and social legacies that make possible the regulated expression of the genome during the life cycle of the organism” (Griffiths and Stotz, 2013: 110). Taking seriously the idea of a developmental niche as the proper integrative framework for extended inheritance, as Griffiths and Stotz (2013) claim, means also understanding that environmental and social factors, not only merely “genetic” factors, “carry information in development” (ibid.: 179).

The environment is therefore now seen as directly inducing variations in evolution (Jablonka and Lamb, 2005), and its role as “initiator of evolutionary novelties” clearly recognized (see also Pigliucci, 2001; West-Eberhard, 2003; Pigliucci and Muller, 2010).

In sum, the narrow, gene-centric view of inheritance that was at the core of the Modern Synthesis in evolutionary thinking has been profoundly challenged and opened to a plurality of different non-genetic mechanisms (Bonduriansky, 2012; Bonduriansky and Day, 2009; Uller, 2013). By inviting one to think that “heredity involves more than genes”, and that “new inherited variations (…) arise as a direct, and sometimes directed, response to environmental challenge” epigenetic inheritance seems close to Lamarckian ideas of soft inheritance and inheritance of acquired features (Jablonka and Lamb, 1995: 1; see also Jablonka and Lamb, 2005; Gissis and Jablonka, 2011), although clearly the interpretation of epigenetics in such a broad and heterodox conceptual framework remains debated and controversial.


Where Epigenetics Meets Neuroscience


Some of the most influential studies behind the recent surge of interest in epigenetics originate from, or directly cut across, neuroscience research. Epigenetic research offers a key missing link in the dynamic interplay between experience and the genome in sculpting neuronal circuits, especially in critical periods of plasticity (Fagiolini et al., 2009). It attempts to make visible the molecular pathways that explain how transient environmental factors can lead to “long-lasting modifications of neural circuits and neuronal properties” (Guo et al., 2011).

The porousness of the brain to social signals has been at the core of social neuroscience since its beginning in the 1990s. I will focus here on three streams of research that have played a crucial role in taking this openness and plasticity of the social brain to a new level. Epigenetics in this sense can be seen as the climax of that very visible process of the “socialization” of biological and neurobiological concepts that we have witnessed in action in evolutionary thinking since at least the 1990s (Meloni, 2014).

Molecular Pathways of Maternal Care in the Brain

In current epigenetic studies, the story of how Michael Meaney, a neuroscientist and clinical psychologist at McGill, and Moshe Szyf, a molecular biologist and professor of pharmacology also at McGill, met in a bar during a conference in Spain has been told many times (Buchen, 2010; Hoag, 2011), to illustrate the almost serendipitous encounter between a neurobiological perspective and a genetic one that lies behind social epigenetic research. This interdisciplinary approach is at the very core of Meaney’s group’s maternal care studies on the intergenerational transmission of stress and inadequate mothering in rodents (Meaney, 2001b), among the best known in all the epigenetic literature (along with Waterland and Jirtle’s studies on agouti mice: Waterland and Jirtle, 2003, 2004). The story of how this study was first rejected by Science and Nature is likewise told to illustrate the impervious terrain that marked the beginning of epigenetic research.

Meaney et al.’s study, finally published as “Epigenetic programming by maternal behavior” in Nature Neuroscience (Weaver et al., 2004), has become a massively cited article (with more than 2500 citations), almost an icon of the new linkage between behavioral exposures (in this case, maternal care and neonatal handling) and genetic expression/development in the brain.

The basic finding of the study is that increased licking and nursing activity by rat mothers altered the offspring’s DNA methylation patterns in the hippocampus, thus affecting “the development of hypothalamic-pituitary-adrenal responses to stress through tissue-specific effects on gene expression” (Weaver et al., 2004: 847). Even more interestingly, when pups of non-caring mothers were cross-fostered to affectionate ones, their DNA methylation phenotype came to reflect that of the foster mother and was maintained stably into adulthood, thus shaping life-long behavioral trajectories.

This direct linkage between maternal care and neurological development (via DNA methylation) was conceptualized in terms of environmental (or epigenetic) programming, that is, a stable, non-sequence-based modification of gene expression (Francis et al., 1999) that proceeds without germline transmission. Another take-home message of Meaney’s group’s study is the emphasis on a critical period, the first week of life, for the effects of early experience on methylation patterns in the hippocampus. Epigenetic modifications are stably encoded during early life experiences, thereby becoming the critical factor in “mediating the relationship between these experiences and long-term outcomes” (Fagiolini et al., 2009). The sustained effects of these cellular modifications “appear to form the basis for the developmental origins of vulnerability to chronic disease” (Meaney et al., 2007).

Stigmas of Trauma in the Brain

But what about epigenetic research involving humans more specifically? In 2009, another study appeared with a significant impact on the field of social epigenetics. The research, originating again from Meaney’s lab, focused on the level of DNA methylation in postmortem hippocampal tissue from two groups of suicide victims (using samples from the Quebec Suicide Brain Bank), one of which had a history of abuse (McGowan et al., 2009).

The study found higher levels of DNA methylation in the regulatory region of the glucocorticoid receptor (resulting in decreased levels of glucocorticoid receptor mRNA) in the abused group compared with the non-abused and control groups. Early life adversity (childhood abuse), therefore, and not suicide per se, is the key factor explaining the alteration of DNA methylation in crucial genomic regions (the neuron-specific glucocorticoid receptor gene, NR3C1) in the brain.
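
As a purely illustrative sketch, with invented numbers that are not the study’s data, a group comparison of this kind amounts to contrasting mean methylation levels, that is, the fraction of methylated CpG sites in a regulatory region, between groups of samples:

```python
# Hypothetical sketch only: comparing mean promoter methylation between
# two groups, in the spirit of (but NOT using data from) McGowan et al.
from statistics import mean

# Invented per-sample values: fraction of methylated CpG sites in a
# glucocorticoid-receptor regulatory region (higher = more methylated,
# hence less receptor mRNA expressed).
abused = [0.42, 0.39, 0.45, 0.41, 0.44]
control = [0.28, 0.31, 0.27, 0.30, 0.29]

print(f"abused group mean:  {mean(abused):.3f}")   # 0.422
print(f"control group mean: {mean(control):.3f}")  # 0.290
print(f"difference:         {mean(abused) - mean(control):.3f}")
```

In the actual study, of course, methylation was assayed site by site and the differences tested statistically; the sketch only illustrates the logic of the group comparison.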

This work, which translates Meaney’s research into human studies for the first time, is consistent with the findings of the studies on rodents and has been welcomed as biological evidence of how traumatic life experiences become embedded in the “memory” of the organism, getting “under the skin” (Hyman, 2009).

The findings of this research, along with those of McGowan et al. (2008), are consistent with the non-human animal studies of Meaney’s group in their emphasis on early life events as a critical period for the establishment of stable DNA methylation patterns, and therefore of different pathways of neural development. As the study claims: “early life events can alter the epigenetic state of relevant genomic regions, the expression of which may contribute to individual differences in the risk for psychopathology” (McGowan et al., 2009: 346).

As in Meaney’s group’s previous studies, the emphasis is on the effects of disrupted parental care on methylation levels in critical areas of the brain implicated in the regulation of responses to stress and anxiety disorders. More importantly, the study aims to open up important connections between variations in DNA methylation in the hippocampus and the emergence of psychiatric disorders, a topic that is becoming increasingly relevant in epigenetic research (see for instance Tsankova et al., 2007; Nestler, 2009), as can be seen from the third and final cluster of what might be called “epigenetic neuroscience” research.

Neuroepigenetics: Mechanisms of Plasticity for the Adult Brain

A final and parallel development at the crossroads of epigenetics and neuroscience comes from the newborn sub-field of (cognitive) neuroepigenetics (Day and Sweatt, 2011; Sweatt, 2013), which focuses on how epigenetic mechanisms impact the adult brain and the central nervous system.

Neuroepigenetics aims to investigate the changes in epigenetic marks that accompany neuronal plasticity and the processes of learning and memory formation/maintenance in the brain (see also Levenson and Sweatt, 2005; Borrelli et al., 2008). In a sense, epigenetic marking itself can be seen as a “persistent form of cellular memory” by which memories of past environmental events are fixed on the genome. This would explain, it has been claimed, the fact that the nervous system has co-opted this mechanism “to subserve induction of synaptic plasticity, formation of memory and cognition in general” (Levenson and Sweatt, 2006). Another task of neuroepigenetics is to understand how epigenetic mechanisms may vary depending on the different neural circuits and behavioral tasks involved (Day and Sweatt, 2011).

The main difference from the other studies highlighted in this section is the emphasis on the adult brain. Here, given that adult neurons do not divide, epigenetic tags, although long-lasting, are non-heritable, thus setting “the roles of epigenetic mechanisms in adult neurons apart from their roles in developmental biology” (Sweatt, 2013: 627). The term neuroepigenetics is therefore what distinguishes this specific aspect of epigenetic research from other areas of developmental biology (Day and Sweatt, 2011).

A new wave of publications on the epigenetics of the adult brain illustrates well the high expectations surrounding epigenetic knowledge as an explanation of the molecular mechanisms of plasticity. In a recent article, for instance, Woldemichael et al. (2014) look at the ways in which epigenetic processes may subserve brain plasticity in relation to, among other things, drug addiction and cognitive dysfunctions (age-associated cognitive decline, Alzheimer’s disease, etc.). Moreover, they do so always with an eye to the potential of epigenetic therapies to reverse neurodegenerative disorders (see also Gapp et al., 2014).
Other recent publications in the field examine the epigenetics of stress vulnerability and resilience (see also Stankiewicz et al., 2013; Zannas and West, 2014), neuropsychiatric disorders (Hsieh and Heisch, 2010), major psychosis (Labrie et al., 2012), autism spectrum disorders (Ptak and Petronis, 2010), and mood disorders (Fass et al., 2014); again, with an eye to the development of novel therapeutics.

Many of these publications reflect very early attempts to use epigenetic knowledge to explain the molecular mechanisms of brain plasticity, and in much of this literature the supposed distinctiveness of epigenetic changes in the brain, as opposed to other organs, is never really problematized. It is still helpful, however, to survey this emerging literature as an illustration of the current rewriting, in epigenetic terms, of many themes from the last decade of research on the social brain, particularly its plasticity and permeability to environmental signals. Epigenetics in this sense can be seen as the latest frontier in the construction of the narrative of the sociality of the brain: the discovery of a possible crucial mechanism mediating between environmental exposures, gene expression, and neuronal development that is likely to validate and strengthen, at the molecular level, many of the intuitions that have been at the core of social neuroscience research since the 1990s.

Implications for Social Theory


In the last two decades of research in cognitive science, mind and cognition have increasingly been understood as extended, enacted, and embodied phenomena (Clark and Chalmers, 1998; Thompson, 2007; Clark, 2008; Noë, 2009; Menary, 2010). Neuroscience has joined this trend: the brain has ceased to be represented as an isolated organ and has instead become a multiply connected device profoundly shaped by environmental influences. One of the membranes demarcating the biological from the social, the skull (Hurley in Noë, 2009), has been made increasingly permeable to a two-way interaction.

The brain is increasingly thought of as a tool specifically designed to create social relationships, to reach out for human relationships and company, literally made sick by loneliness and social isolation (Cacioppo and Patrick, 2008; Hawkley and Cacioppo, 2010). The emergence of this novel language testifies to the success of a discipline like social neuroscience (Matusall et al., 2011), with its landscape populated by empathic brains and moral molecules, mirror neurons and plastic synapses.

However, in the context of this trend toward an increasing openness of the biological to social signals, the rise of molecular epigenetics promises to bring this discourse to an entirely new and more powerful level. Undoubtedly, this promissory vocabulary, which has always been part of the rhetoric of the life sciences (as highlighted by a consistent body of scholarship in Science and Technology Studies), should not be taken at face value. The “economy of hope” that surrounds epigenetics as a possible relaunch of the genomics discourse in particular deserves critical scrutiny (Meloni and Testa, in press). However, appreciating this more critical moment cannot become a reason to deny the potential contained in the epigenetic discourse, especially when conceptualized within more sophisticated, non-gene-centric frameworks (Griffiths and Stotz, 2013).

When compared with recent arguments about the sociality of the brain, epigenetics seems to play a twofold function. Epigenetics not only supplements social neuroscience by highlighting the molecular mechanisms that orchestrate brain plasticity and memory formation, but also seeks to blur any residual distinction between biology and social/ecological contexts. If the first model of the cognitive brain was that of a computing machine, entirely severed from environmental influences, and the brain of social neuroscience still oscillated between plastic change and hardwiring metaphors, with the rise of what can be named the “epigenetic brain” or neuroepigenetics research the reciprocal penetration of the social and the biological reaches a point where trying to establish any residual distinction seems increasingly a meaningless effort.

Particularly when conceptualized within theoretical frameworks like Developmental Systems Theory (Oyama, 2000a[1985], b; Oyama et al., 2001) and other postgenomic approaches, epigenetic research exemplifies how we are moving toward a post-dichotomous view of biosocial processes that research in social neuroscience was only partially able to anticipate. With the rise of molecular epigenetics, the biological is opened to environmental influences, to social factors, and to the marks of personal experience like never before. The sovereign role of the gene has been decentralized (Van Speybroeck, 2002) and the genome made a “reactive genome” (a term first coined by Gilbert, 2003, and expanded on more recently by Keller, 2011; Griffiths and Stotz, 2013).

At the same time, the notion of vitality has been expanded to a new range of actors and “democratized” (Landecker and Panofsky, 2013). In epigenetic research, the “social” seems to assume a causative role in human biology to a degree unseen before (Landecker and Panofsky, 2013). The very emergence of a new terminology of “social and environmental programming” reflects this unprecedented prominence of the social level. Such a discourse was quite unimaginable under the Weismannian conception of an impenetrable barrier between soma and germ-line, as well as under what can be seen as the molecular translation of Weismann’s argument (Griesemer, 2002), the so-called Central Dogma of molecular biology (Crick, 1958), which stated a strict one-way flow of information from DNA to RNA. In reversing the informational asymmetry between genotype and phenotype, in stressing the relevance of context (interpretation) for the level of DNA information (Jablonka and Lamb, 2005; Jablonka and Raz, 2009), and finally in giving a life-span to genetic processes, making them radically dependent on temporal factors (Landecker and Panofsky, 2013), epigenetics displays unique features that promise to radically change the language of biology and, as a consequence, the system of rules that has so far regulated the biology/society boundary.

On one level, this unprecedented porousness of the biological to the social comes as good news for social scientists interested in notions of embodiment and in exploring the pathways through which the social shapes and is literally inscribed into the body. The investigation of the ways in which social structures and socio-economic differences literally get under the skin (and into the brain), affecting the deep recesses of human physiology, has always been an important concern of sociological theory, from the French doctor and economist René Villermé and Friedrich Engels in the 1800s (see Krieger and Davey Smith, 2004), to social epidemiologists (Krieger, 2001, 2004, 2011; Shaw et al., 2003; Krieger and Davey Smith, 2004) and neuroscientists (Lupien et al., 2000; Noble et al., 2005, 2007, 2012; Farah et al., 2006; Kishiyama et al., 2009; Hackman et al., 2010; Rao et al., 2010) in the early twenty-first century.

However, given the epistemological and political implications of gene-centrism and the mainstream view of biology as an unchangeable form of secular destiny in the twentieth century, these more plastic biosocial approaches have so far remained exceptions (Boas’s 1910 research on the changing bodily form of immigrants and their descendants in the USA being one such exception). Under these unfavourable epistemic circumstances, the possibility of sophisticated and enriching biosocial explorations has been profoundly limited and mostly met with skepticism by social theorists. Across the twentieth century, to import the biological into the social meant, almost exclusively, to invoke unacceptable class-, race-, or gender-biased explanations. Faced with this view of biology, disembodied social constructionist explanations that rejected biology entirely seemed (almost) the only way out for social scientists.

However, in the present scenario marked by the rise of epigenetics and the new social biology, this marginalization no longer seems compulsory for social scientists. Undoubtedly, epigenetics is likely to revitalize a social science approach interested in how “phenomena of the outside (….) undergo transformations and are incorporated to re-appear or be reproduced on the inside” (Beck and Niewöhner, 2006: 224; Niewöhner, 2011; Guthman and Mansfield, 2012). It may supplement findings from medicine, neuroscience, and animal studies on the way in which social phenomena (social position, socio-economic status (SES), social isolation, rank, stress, etc.) are translated into the body and affect human health. On these novel bases, a fresh dialog between the social and biological disciplines, in which epigenetics can penetrate the “sometimes obdurate wall between the life and social sciences” (Landecker and Panofsky, 2013: 2), seems more realistic than in the past (Rose, 2013; Meloni, 2013b, 2014).

On another level, however, recognizing the great potential of epigenetic research to reframe and move beyond the sterile nature/nurture opposition is no reason to deny the ambiguities and contradictory claims arising in the field, or the difficult methodological and epistemic questions still awaiting answers before any major biosocial synthesis can be proposed.

Even leaving aside the hypes and controversies surrounding epigenetics, social scientists and theorists need to be aware that an entirely new array of problems is emerging in the postgenomic scenario. This new complex of social problems does not derive from the dichotomous separation of biological and social causes in which the biological is supposed to have causal primacy (as in the hostile post-1970 debates on sociobiology, genetic reductionism, or evolutionary psychology). Rather, these problems arise for the exact opposite reason: because of the inextricable mixture of social and biological factors typical of the epigenetic and postgenomic conceptual landscape.

There is a specific and in a way unprecedented profile of problems in the postgenomic age (Meloni, 2013b, 2014; Meloni and Testa, in press) that, without any ambition to be conclusive, I will try to sketch below. Rather than as consolidated analyses of what is likely to happen in the epigenetic era, though, these different clusters of problems can be read as preliminary questions for a possible agenda for the social studies of the life sciences in the coming years.

Postgenomic Epistemology: Molecularizing Nurture?

Epigenetic research undermines the nature/nurture opposition on both sides of the dichotomy. To the extent that genes are now “defined by their broader context”, our understanding of nature becomes less essentialist and “more epigenetic” (Griffiths and Stotz, 2013: 228), that is, always entangled with social and environmental factors. However, the epistemic condition for environmental, social, or experiential factors to become readable in the epigenetic paradigm is their translation into signals at the molecular level (Landecker, 2011). This trend finds confirmation in the fact that different social categories (from race to class) and environmental factors (from maternal care to food and toxins) are increasingly being conceptualized today in molecular terms (Landecker, 2011; Niewöhner, 2011).

Only to the extent that our understanding of nurture becomes more “mechanistic” (Griffiths and Stotz, 2013: 5) can we therefore find a solution to the nature/nurture conundrum in the postgenomic era. It is important to note here that mechanisms are understood by Griffiths, Stotz, and other philosophers of biology not as a vulgar reductionist concept but as a more sophisticated, multilevel, and emergentist notion, one that includes looking “upward to higher levels” (Bechtel, 2008: 21) as well as making room for the active, autonomous role of human agency.

This new version of mechanism, as Griffiths and Stotz again claim, is producing an unexpected rapprochement with themes from the holistic or, as they prefer, “integrationist” tradition (ibid.: 103).

Nonetheless, although social scientists will recognize in this anti-reductionist rethinking of the notion of mechanism an appealing theoretical move, two sources of skepticism remain to be addressed: (1) that in spite of the many sophistications of philosophers of science and biology, the bulk of epigenetic research will much more naively try to do business as usual, inscribing the effects of complex social phenomena at the digitalized level of methylation marks (Meloni and Testa, in press), with a serious risk of over-simplification as well as of attributing causal relevance to random biological processes; and (2) that mainstream social theory will remain unconvinced by any idea of the tractability of social and cultural phenomena, given the legacy of traditions (from Weberian neo-Kantianism to Durkheim, from Western Marxism to Boasian anthropology: Benton, 1991; Meloni, 2011, 2014) that made anti-naturalism and the incommensurable nature of social and cultural processes the hallmark of social research.

Given these opposite limitations, complex biosocial and biocultural approaches are likely to remain a minority strategy, caught between persisting reductionist tendencies in bioscience and the continuing legacy of bio-phobia in social theory.

Postgenomic Biopolitics: “Upgrade Yourself” or Born Damaged for Ever?

The epigenome is caught in a curious dialectic of stability and modifiability (Meloni and Testa, in press). Whereas genetic sequences are fixed and unchangeable, epigenetic marks are at once “long lasting” and “potentially reversible” (Weaver et al., 2005; McGowan and Szyf, 2010). In its social dimension, the plasticity of the epigenome, just like the plastic brain about which Catherine Malabou (2008) has written, can be understood in two alternative ways: (i) passively, as a capacity to receive form: the epigenome, in contrast to genes, is vulnerable to environmental insults; (ii) actively, as a capacity to give form: the epigenome can be changed and upgraded through diet, exercise, and therapeutic and social manipulations.

In the wider society, this dialectic within the language of epigenetics is likely to be amplified further, as an oscillation between determinism and hopes of individual/social amelioration: (i) determinism, because of concerns that social and environmental insults can leave indelible scars on the body and brain (“Babies born into poverty are damaged forever before birth”, headlined the UK newspaper The Scotsman (Mclaughlin, 2012), commenting on research on levels of methylation among different social groups in Glasgow, of which more below); (ii) amelioration, because the upgradable epigenome may become the basis for a new motivation to intervene in, control, and improve it through pharmacological agents or social interventions.

On the first dimension, political theorists and bioethicists have already started to reflect upon the “collective responsibility” to protect the vulnerable epigenome (Dupras et al., 2012; Hedlund, 2012), while legal theorists are speculating on the “number of novel challenges and issues” that epigenetic transgenerational effects may represent as a new possible “source of litigation and liability” (Rothstein et al., 2009: 37). The transmissibility via the epigenome of the insults of the past into the bodies of present or future generations therefore raises novel issues of intergenerational equity. This possible moralization of behaviors around the vulnerable epigenome finds a particularly visible example in the overwhelming centrality of the maternal body as a target of responsibility for harmful epigenetic consequences for the child’s health (Richardson, in press).

The second pole of this dialectic of plasticity is instead represented by the many injunctions (it is enough to surf the web for a few minutes to find many examples) to “upgrade”, “improve”, “train”, or “change your epigenome”. The possibility of influencing the epigenome through diet, lifestyle, physical activity, stress, tobacco, alcohol, and pharmacological intervention becomes the likely basis for new forms of “therapeutic manipulations” (McGowan and Szyf, 2010). In David Shenk’s recent The Genius in All of Us one can see, iconically, the mobilization of epigenetics, celebrated as a “new paradigm” and “the most important discovery in the science of heredity since the gene” (Shenk, 2010: 129), in the service of a view of unlimited plasticity and constant struggle to enhance our capacity to reach talent and brilliance (for a comment, see Papadopoulos, 2011).

Which of the two poles of this dialectic of plasticity will prevail in the representation of epigenetics in the wider society, and in the shaping of epigenetic science itself, remains an open question. Science and society are constantly co-produced, and this two-way interaction seems particularly visible in epigenetic research, which thus represents a great opportunity to make this newly emerging discipline a theoretical spyglass through which to observe the vivid emergence of the tensions and complexities of the postgenomic age.

Postgenomic Social Policies?

The increasing emphasis on the biological embedding of life’s adversities at the genomic level is bringing to public attention what has been called a new “biology of social adversity” (Boyce et al., 2012). Epigenetic mechanisms are a major part of this novel approach. Epigenetics has already been used to explain the persistence, within specific groups, of “connections that have previously been hard to explain” (Landecker, 2011), particularly the perpetuation of health disparities between the rich and the poor, between and within countries (Vineis et al., 2013). An important trend is the use of epigenetic and developmental findings in so-called early-intervention programmes (Shonkoff et al., 2009).

Over the last few years, a new array of studies has started to look at the way in which social influences can become embodied via epigenetic mechanisms and have lifelong and even inter-generational effects (Miller et al., 2009; Wells, 2010; Borghol et al., 2012). Kuzawa and Sweet’s (2009) study on racial disparities in cardiovascular health in the USA is a major example of the reconfiguration of the relationship between biological and social factors brought about by epigenetics. This work has focused on epigenetic and other developmental mechanisms as the missing link between early life environmental factors (e.g., maternal stress during pregnancy) and adult race-based health disparities in “hypertension, diabetes, stroke, and coronary heart disease”. It is an important attempt to rethink race along a different line of thought, at once somatic and socio-cultural.

In the UK, the study by McGuinness et al. (2012) on the correlation between SES and epigenetic status (variations in the level of methylation) between socio-economically deprived and more affluent groups in Glasgow (but also between manual and non-manual workers) points more empirically to an association between social neglect, poverty, and “aberrant” levels of methylation. “Global DNA hypomethylation”, the study claims, “was associated with the most deprived group of participants, when compared with the least deprived”. Epigenetic markers are used in this and other studies as a “bio-dosimeter” (ibid.: 157) to measure the impact of social adversity on lifestyle and disease susceptibility (see also Landecker and Panofsky, 2013).
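
To make the “bio-dosimeter” idea concrete, here is a minimal, purely hypothetical sketch (the data are invented, not McGuinness et al.’s): global methylation is treated as the fraction of CpG sites called methylated in a sample, and group means are compared, with lower values indicating hypomethylation:

```python
# Toy "bio-dosimeter" sketch with invented data (not from McGuinness et al.).
from statistics import mean

def global_methylation(cpg_calls):
    """Fraction of CpG sites called methylated (1) rather than unmethylated (0)."""
    return sum(cpg_calls) / len(cpg_calls)

# Each inner list: hypothetical methylation calls at 10 CpG sites per sample.
most_deprived = [
    [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # global methylation 0.3
    [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],  # 0.3
]
least_deprived = [
    [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],  # 0.7
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],  # 0.7
]

for label, group in [("most deprived", most_deprived),
                     ("least deprived", least_deprived)]:
    print(label, round(mean(global_methylation(s) for s in group), 2))
```

Real assays, of course, measure methylation across hundreds of thousands of sites with continuous intensity values; the sketch only illustrates the logic of reading a group-level methylation average as a “dose” of social adversity.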

Looking at the past two decades of attempts to use genetics and neuroscience in the public arena as the ultimate bastion of evidence for social deprivations and inequalities, it is possible that epigenetic findings will become increasingly relevant in social policy strategies. How these findings will help convince policy-makers of the “non-ethereal” nature of environmental influences in order to make “more effective arguments” about the biological impact of social forces (Miller, 2010), and influence specific political agendas (as seen in the notion of neuropolicy: Racine et al., 2005), is difficult to foresee at this stage. It is clear, however, that the seductive appeal of neurobiological explanations (Wastell and White, 2012) is likely to be amplified further when combined with the seductive appeal of epigenetics, where social differences and environmental insults are now expected to be seen literally “imprinted on DNA”.

It is important, however, to remember the huge gap between public sensationalism, especially in its public health implications, and the cautious takes of the experts (Feil and Fraga, 2012; Meloni and Testa, in press). Even more ambiguously, the emergence of a possible discourse that identifies, at the local level, subgroups with abnormal epigenetic marks (reflecting the perpetuation of historically disadvantageous conditions) may create a whole new set of social and public policy questions. The legacy of soft or Lamarckian inheritance in social policy discourses has not always been particularly progressive (Bowler, 1984), and its possible renewed appeal today should become a matter of reflection for social scientists (Meloni and Testa, in press). Moreover, there is increasing concern among social scientists that constructs rather widespread in the epigenetics and DOHaD literature, from “maternal capital” (Wells, 2010) to the growing emphasis on maternal behaviors and the maternal body as the “vector” through which epigenetic patterns are established in early life (as highlighted by Richardson, in press), could have problematic effects on public health strategies and on moral reasoning about families, parenting, and women in particular.


Conclusion


In spite of my emphasis on some of the ambiguities of epigenetic research, the most important lesson for social scientists and theorists at this stage is probably that the future, and therefore the social meaning, of postgenomics and epigenetics is not already written. As Michel Morange (2006: 356) claimed some years ago: “the very fashionable post-genomic programs can have very different stakes, some reductionist and other holistic, depending upon who is supporting them. The current state of biological research is very contrasted, because biology is hesitating at a crossroads between reductionism and holism”. It is therefore too early to say whether molecular epigenetics will become mired in another form of reductionism (Lock, 2005) or will join new and exciting theoretical collaborations capable of “transcend[ing] the divide between ‘nature’ and ‘nurture’ intellectually and methodologically” (Singh, 2012). Epigenetics is not set in stone, but an open field where theoretical debates and critiques are vital (Landecker and Panofsky, 2013). Given the multiple and plastic nature of its very concept, at the crossroads of different traditions and research styles, epigenetics will likely be a terrain of conceptual battle between different stakeholders and intellectual agendas. This is probably one further reason for social scientists to be part of this debate from its very beginning.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I thank Tobias Uller and Frances A. Champagne for kindly commenting on the first section of this article (of course, I am solely responsible for any possible inaccuracy there), and Andrew Turner for his help with the English language in the text. Thanks to the two referees for their extremely helpful remarks, many of which are reflected in the final iteration. I acknowledge the contribution of a Marie Curie ERG grant, FP7-PEOPLE-2010-RG (research titled “The Seductive Power of the Neurosciences: An Intellectual Genealogy”).

Footnotes
  1. ^ Here postgenomics is to be understood in a twofold sense: chronologically, it refers to what has happened after the deciphering of the human genome in 2003; epistemologically, it illustrates the emergence of a number of gaps in knowledge and unforeseen complexities surrounding the gene that have led to the current contextual conceptualization of the genome as affected by environmental signals and part of a broader regulative architecture (Dupré, 2012; Griffiths and Stotz, 2013). It is particularly this latter meaning that is central here.
  2. ^ I thank one of the two anonymous reviewers for bringing this to my attention.
References are available at the Frontiers site