Offering multiple perspectives from many fields of human inquiry that may move all of us toward a more integrated understanding of who we are as conscious beings.
This is from Nautilus, an interesting look at the future of super-intelligent humans. I suspect we are not likely to see this in my lifetime (I'm 47), but I have no doubt that as we learn more and more about genetics and intelligence, it is inevitable that we will try to create whole cities of the super-smart.
The question is whether they will be smart enough to realize they were created to be pawns in the battles between oligarchies.
Genetic engineering will one day create the smartest humans who have ever lived.
By Stephen Hsu. Illustration by Gemma O’Brien.
October 16, 2014
Lev Landau, a Nobelist and one of the fathers of a great school of Soviet physics, had a logarithmic scale for ranking theorists, from 1 to 5. A physicist in the first class had ten times the impact of someone in the second class, and so on. He modestly ranked himself as 2.5 until late in life, when he became a 2. In the first class were Heisenberg, Bohr, and Dirac among a few others. Einstein was a 0.5! My friends in the humanities, or other areas of science like biology, are astonished and disturbed that physicists and mathematicians (substitute the polymathic von Neumann for Einstein) might think in this essentially hierarchical way. Apparently, differences in ability are not manifested so clearly in those fields. But I find Landau’s scheme appropriate: There are many physicists whose contributions I cannot imagine having made. I have even come to believe that Landau’s scale could, in principle, be extended well below Einstein’s 0.5. The genetic study of cognitive ability suggests that there exist today variations in human DNA which, if combined in an ideal fashion, could lead to individuals with intelligence that is qualitatively higher than has ever existed on Earth: Crudely speaking, IQs of order 1,000, if the scale were to continue to have meaning.
The Overnight Genius: Actor Cliff Robertson, who plays the part of bakery worker-turned-genius Charly (spelled Charlie in the novel), studies an illustration of a maze in this scene from the 1967 film adaptation of Flowers for Algernon. Photo by Cinerama/Courtesy of Getty Images

In Daniel Keyes’ novel Flowers for Algernon, a mentally challenged adult called Charlie Gordon receives an experimental treatment to raise his IQ from 60 to somewhere in the neighborhood of 200. He is transformed from a bakery worker who is taken advantage of by his friends into a genius with an effortless perception of the world’s hidden connections. “I’m living at a peak of clarity and beauty I never knew existed,” Charlie writes. “There is no greater joy than the burst of solution to a problem… This is beauty, love, and truth all rolled into one. This is joy.” The contrast between a super-intelligence and today’s average IQ of 100 would be greater still.
The possibility of super-intelligence follows directly from the genetic basis of intelligence. Characteristics like height and cognitive ability are controlled by thousands of genes, each of small effect. A rough lower bound on the number of common genetic variants affecting each trait can be deduced from the positive or negative effect on the trait (measured in inches of height or IQ points) of already discovered gene variants, called alleles.

The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability. They have shown that a handful of single-nucleotide polymorphisms in human DNA are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals. If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen. This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.1

Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability.
The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points. Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.
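The square-root arithmetic above can be checked with a toy simulation. Assuming, purely for illustration, 10,000 causal loci, each carried in its positive form with probability 0.5 and each contributing equally, the spread of allele counts in a population is on the order of the square root of the number of loci, and the all-positive genotype sits roughly 100 standard deviations above the mean:

```python
import random
import statistics

random.seed(0)

N_LOCI = 10_000    # the article's rough estimate of causal variants
N_PEOPLE = 500     # sample size for the toy population

# Toy additive model (an illustrative assumption): each locus independently
# carries its IQ-increasing variant with probability 0.5, and every locus
# contributes equally, so a person's trait value is just the allele count.
counts = [sum(random.random() < 0.5 for _ in range(N_LOCI))
          for _ in range(N_PEOPLE)]

mean_count = statistics.mean(counts)   # close to N_LOCI / 2 = 5,000
sd_count = statistics.pstdev(counts)   # close to sqrt(N_LOCI) / 2 = 50

# An individual carrying every positive variant sits about sqrt(N_LOCI) = 100
# standard deviations above the population mean -- the figure quoted above.
max_sds = (N_LOCI - mean_count) / sd_count
```

The constant in front of the square root depends on allele frequencies and effect sizes, so the article's "about 100 variants per 15 points" and this toy model's 50 agree in order of magnitude rather than exactly; the 100-standard-deviation conclusion is the same either way.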
Supersize me: The breeding of domesticated plants and animals has changed some populations by as much as 30 standard deviations. Broiler chickens, for example, have increased in size more than four times since 1957. A similar approach could be applied to human intelligence, leading to IQs greater than 1,000.
Source: Zuidhof, M.J., Schneider, B.L., Carney, V.L., Korver, D.R., & Robinson, F.E. Growth, efficiency, and yield of commercial broilers from 1957, 1978, and 2005. Poultry Science 93, 1-13 (2014).
It is not at all clear that IQ scores have any meaning in this range. However, we can be confident that, whatever it means, ability of this kind would far exceed the maximum ability among the approximately 100 billion total individuals who have ever lived. We can imagine savant-like capabilities that, in a maximal type, might be present all at once: nearly perfect recall of images and language; super-fast thinking and calculation; powerful geometric visualization, even in higher dimensions; the ability to execute multiple analyses or trains of thought in parallel; the list goes on. Charlie Gordon, squared.

To achieve this maximal type would require direct editing of the human genome, ensuring the favorable genetic variant at each of 10,000 loci. Optimistically, this might someday be possible with gene editing technologies similar to the recently discovered CRISPR/Cas system that has led to a revolution in genetic engineering in just the past year or two. Harvard genomicist George Church has even suggested that CRISPR will allow the resurrection of mammoths through the selective editing of Asian elephant embryo genomes. Assuming Church is right, we should add super-geniuses to mammoths on the list of wonders to be produced in the new genomic age.

SOME of the assumptions behind the prediction of 1,000 IQs are the subject of ongoing debate. In some quarters, the very idea of a quantification of intelligence is contentious. In his autobiographical book Surely You’re Joking, Mr. Feynman!, the Nobel Prize-winning physicist Richard Feynman dedicated an entire chapter to his quest to avoid the humanities, called “Always Trying to Escape.” As a student at the Massachusetts Institute of Technology, he says, “I was interested only in science; I was not good at anything else.” The sentiment is a familiar one: Common wisdom sometimes says that people who are good at math are not so good with words, and vice versa.
This distinction has affected how we understand genius, suggesting it is an endowment of one particular faculty of the brain, and not a general superlative of the whole brain itself. This in turn makes the idea of apples-to-apples comparisons of intelligence moot, and the very idea of a 1,000 IQ problematic. But psychometric studies, which seek to measure the nature of intelligence, paint a different picture. Millions of observations have shown that essentially all “primitive” cognitive abilities—short and long term memory, the use of language, the use of quantities and numbers, the visualization of geometric relationships, pattern recognition, and so on—are positively correlated. The figure below displays graphically the ability scores of a large group of individuals, in areas such as mathematical, verbal, and spatial performance. The space of the graph is not filled uniformly, but instead the points cluster along an ellipsoidal region with a single long (or principal) axis.
Smart is smart: The Project Talent study looked at the mathematical, verbal, and spatial skills of over 100,000 ninth-graders, as displayed in this scatterplot. Ability in one area was positively correlated with ability in the other two.
These positive correlations between narrow abilities suggest that an individual who is above average in one area (for example, mathematical ability) is more likely to be above average in another (verbal ability). They also suggest a robust and useful method for compressing information concerning cognitive abilities. By projecting the performance of an individual onto the principal axis, we can arrive at a single-number measure of cognitive ability: the general factor, g. Well-formulated IQ tests are estimators of g.

Does g predict genius? Consider the Study of Mathematically Precocious Youth, a longitudinal study of gifted children identified by testing (using the SAT, which is highly correlated with g) before age 13. All participants were in the top percentile of ability, but the top quintile of that group was at the one in 10,000 level or higher. When surveyed in middle age, it was found that even within this group of gifted individuals, the probability of achievement increased drastically with early test scores. For example, the top quintile group was six times as likely to have been awarded a patent as the lowest quintile. Probability of a STEM doctorate was 18 times larger, and probability of STEM tenure at a top-50 research university was almost eight times larger. It is reasonable to conclude that g represents a meaningful single-number measure of intelligence, allowing for crude but useful apples-to-apples comparisons.

Another assumption behind the 1,000-IQ prediction is that cognitive ability is strongly affected by genetics, and that g is heritable. The evidence for this assumption is quite strong. In fact, behavior geneticist and twins researcher Robert Plomin has argued that “the case for substantial genetic influence on g is stronger than for any other human characteristic.”2
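Both the positive manifold and the value of projecting onto the principal axis can be illustrated with a toy simulation. Assuming (as an illustration, not a fit to the Project Talent data) that each test score is a shared latent factor plus independent test-specific noise:

```python
import random
import statistics

random.seed(1)
N = 5_000

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

# Latent general factor, plus three tests = g + independent noise
# (equal loadings are an assumption made for simplicity).
g = [random.gauss(0, 1) for _ in range(N)]
math_score    = [gi + random.gauss(0, 1) for gi in g]
verbal_score  = [gi + random.gauss(0, 1) for gi in g]
spatial_score = [gi + random.gauss(0, 1) for gi in g]

# Positive manifold: every pair of tests correlates positively (about 0.5 here).
r_math_verbal = corr(math_score, verbal_score)

# Projecting onto the principal axis (with equal loadings this reduces to a
# simple average) estimates the latent factor better than any single test.
composite = [(a + b + c) / 3
             for a, b, c in zip(math_score, verbal_score, spatial_score)]
r_single = corr(math_score, g)   # about 0.71
r_comp = corr(composite, g)      # about 0.87
```

The point of the sketch is that the composite score recovers the shared factor more faithfully than any one test, which is exactly why a single number extracted from many correlated subtests can be meaningful.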
In twin and adoption studies, pairwise IQ correlations are roughly proportional to the degree of kinship, defined as the fraction of genes shared between the two individuals. Only small differences due to family environment were found: Biologically unrelated siblings raised in the same family have almost zero correlation in cognitive ability. These results are consistent over large studies conducted in a variety of locations, including different countries. In the absence of deprivation, it would seem that genetic effects determine the upper limit to cognitive ability. However, in studies where subjects have experienced a wider range of environmental conditions, such as poverty, malnutrition, or lack of education, heritability estimates can be much smaller. When environmental conditions are unfavorable, individuals do not achieve their full potential (see The Flynn Effect).
Sidebar: The Flynn Effect
The Flynn effect, named after the philosopher James Flynn, refers to a significant increase in raw cognitive scores over the last 100 years or so—the equivalent of two standard deviations in some cases. This raises a number of thorny issues. Were our ancestors idiots? Is cognitive ability really so malleable under environmental influence (contrary to what has been found in recent twin studies)? The average person 100 years ago was massively deprived by today’s standards, deprived to a degree that no modern twin study would be allowed to reproduce. United States gross domestic product per capita is eight times higher now, and the average number of years people spend in school has increased dramatically. In the America of 1900, adults had an average of about seven years of schooling, a median of 6.5 years, and 25 percent had completed four years or less. Modern twin and adoption studies only include individuals raised in a much smaller range of environments—almost all participants in recent studies have had legally mandated educations, which in the U.S. includes at least several years of high school.

There is a revealing analogy with height. While taller parents tend to have taller children (i.e., height is heritable), significant gains in average height which mirror the Flynn effect (amounting to an almost +2 standard deviation change) have been observed as nutrition and diet have improved.
SUPER-INTELLIGENCE may be a distant prospect, but smaller, still-profound developments are likely in the immediate future. Large data sets of human genomes and their corresponding phenotypes (which are the physical and mental characteristics of the individual) will lead to significant progress in our ability to understand the genetic code—in particular, to predict cognitive ability. Detailed calculations suggest that millions of phenotype-genotype pairs will be required to tease out the genetic architecture, using advanced statistical algorithms. However, given the rapidly falling cost of genotyping, this is likely to happen in the next 10 years or so. If existing heritability estimates are any guide, the accuracy of genomic-based prediction of intelligence could be better than about half a population standard deviation (meaning better than plus or minus 10 IQ points).

Once predictive models are available, they can be used in reproductive applications, ranging from embryo selection (choosing which IVF zygote to implant) to active genetic editing (for example, using CRISPR techniques). In the former case, parents choosing between 10 or so zygotes could improve the IQ of their child by 15 or more IQ points. This might mean the difference between a child who struggles in school, and one who is able to complete a good college degree. Zygote genotyping from single cell extraction is already technically well developed, so the last remaining capability required for embryo selection is complex phenotype prediction. The cost of these procedures would be less than tuition at many private kindergartens, and of course the consequences will extend over a lifetime and beyond.

The corresponding ethical issues are complex and deserve serious attention in what may be a relatively short interval before these capabilities become a reality. Each society will decide for itself where to draw the line on human genetic engineering, but we can expect a diversity of perspectives.
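The quoted gain from embryo selection can be sanity-checked with order statistics. Assuming a perfectly accurate predictor, and using the standard quantitative-genetics approximation that siblings' genetic values scatter around the parental midpoint with a standard deviation of about 15 divided by the square root of 2, roughly 10.6 IQ points (both assumptions, for illustration only), picking the best of 10 zygotes yields an expected gain of roughly 16 points:

```python
import random
import statistics

random.seed(2)

SIBLING_SD = 15 / 2 ** 0.5   # assumed SD of sibling genetic values (~10.6 IQ pts)
N_EMBRYOS = 10
N_TRIALS = 20_000

# Monte Carlo estimate of the expected advantage of the best of N_EMBRYOS
# sibling draws over the mean of the sibling distribution.
best_of_batch = []
for _ in range(N_TRIALS):
    scores = [random.gauss(0, SIBLING_SD) for _ in range(N_EMBRYOS)]
    best_of_batch.append(max(scores))

expected_gain = statistics.mean(best_of_batch)   # roughly 16 IQ points
```

A noisier predictor would shrink this gain in proportion to its accuracy, so the 15-point figure in the text implicitly assumes quite accurate genomic prediction.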
Almost certainly, some countries will allow genetic engineering, thereby opening the door for global elites who can afford to travel for access to reproductive technology. As with most technologies, the rich and powerful will be the first beneficiaries. Eventually, though, I believe many countries will not only legalize human genetic engineering, but even make it a (voluntary) part of their national healthcare systems. The alternative would be inequality of a kind never before experienced in human history.
Stephen Hsu is Vice-President for Research and Professor of Theoretical Physics at Michigan State University. He is also a scientific advisor to BGI (formerly, Beijing Genomics Institute) and a founder of its Cognitive Genomics Lab.

References
1. Hsu, S.D.H. On the genetic architecture of intelligence and other quantitative traits. Preprint arXiv:1408.3421 (2014).
2. Plomin, R. IQ and human intelligence. The American Journal of Human Genetics 65, 1476-1477 (1999).
By Emily Singer, Quanta Magazine, 10.03.14
Different strains of yeast grown under identical conditions develop different mutations but ultimately arrive at similar evolutionary endpoints. Daniel Hertzberg for Quanta Magazine
In his fourth-floor lab at Harvard University, Michael Desai has created hundreds of identical worlds in order to watch evolution at work. Each of his meticulously controlled environments is home to a separate strain of baker’s yeast. Every 12 hours, Desai’s robot assistants pluck out the fastest-growing yeast in each world — selecting the fittest to live on — and discard the rest. Desai then monitors the strains as they evolve over the course of 500 generations.

His experiment, which other scientists say is unprecedented in scale, seeks to gain insight into a question that has long bedeviled biologists: If we could start the world over again, would life evolve the same way? Many biologists argue that it would not, that chance mutations early in the evolutionary journey of a species will profoundly influence its fate. “If you replay the tape of life, you might have one initial mutation that takes you in a totally different direction,” Desai said, paraphrasing an idea first put forth by the biologist Stephen Jay Gould in the 1980s.

Desai’s yeast cells call this belief into question. According to results published in Science in June, all of Desai’s yeast varieties arrived at roughly the same evolutionary endpoint (as measured by their ability to grow under specific lab conditions) regardless of which precise genetic path each strain took. It’s as if 100 New York City taxis agreed to take separate highways in a race to the Pacific Ocean, and 50 hours later they all converged at the Santa Monica pier.

The findings also suggest a disconnect between evolution at the genetic level and at the level of the whole organism. Genetic mutations occur mostly at random, yet the sum of these aimless changes somehow creates a predictable pattern. The distinction could prove valuable, as much genetics research has focused on the impact of mutations in individual genes.
For example, researchers often ask how a single mutation might affect a microbe’s tolerance for toxins, or a human’s risk for a disease. But if Desai’s findings hold true in other organisms, they could suggest that it’s equally important to examine how large numbers of individual genetic changes work in concert over time.
Michael Desai, a biologist at Harvard University, uses statistical methods to study basic questions in evolution. Sergey Kryazhimskiy
“There’s a kind of tension in evolutionary biology between thinking about individual genes and the potential for evolution to change the whole organism,” said Michael Travisano, a biologist at the University of Minnesota. “All of biology has been focused on the importance of individual genes for the last 30 years, but the big take-home message of this study is that’s not necessarily important.”

The key strength in Desai’s experiment is its unprecedented size, which has been described by others in the field as “audacious.” The experiment’s design is rooted in its creator’s background; Desai trained as a physicist, and from the time he launched his lab four years ago, he applied a statistical perspective to biology. He devised ways to use robots to precisely manipulate hundreds of lines of yeast so that he could run large-scale evolutionary experiments in a quantitative way. Scientists have long studied the genetic evolution of microbes, but until recently, it was possible to examine only a few strains at a time. Desai’s team, in contrast, analyzed 640 lines of yeast that had all evolved from a single parent cell. The approach allowed the team to statistically analyze evolution.
To efficiently analyze many strains of yeast simultaneously, scientists grow them on plates like this one, which has 96 individual wells. Sergey Kryazhimskiy
“This is the physicist’s approach to evolution, stripping down everything to the simplest possible conditions,” said Joshua Plotkin, an evolutionary biologist at the University of Pennsylvania who was not involved in the research but has worked with one of the authors. “They could partition how much of evolution is attributable to chance, how much to the starting point, and how much to measurement noise.”

Desai’s plan was to track the yeast strains as they grew under identical conditions and then compare their final fitness levels, which were determined by how quickly they grew in comparison to their original ancestral strain. The team employed specially designed robot arms to transfer yeast colonies to a new home every 12 hours. The colonies that had grown the most in that period advanced to the next round, and the process repeated for 500 generations. Sergey Kryazhimskiy, a postdoctoral researcher in Desai’s lab, sometimes spent the night in the lab, analyzing the fitness of each of the 640 strains at three different points in time. The researchers could then compare how much fitness varied among strains, and find out whether a strain’s initial capabilities affected its final standing. They also sequenced the genomes of 104 of the strains to figure out whether early mutations changed the ultimate performance.
Fluid-handling robots like this one make it possible to study hundreds of lines of yeast over many generations. Courtesy of Sergey Kryazhimskiy
Previous studies have indicated that small changes early in the evolutionary journey can lead to big differences later on, an idea known as historical contingency. Long-term evolution studies in E. coli bacteria, for example, found that the microbes can sometimes evolve to eat a new type of food, but that such substantial changes only happen when certain enabling mutations happen first. These early mutations don’t have a big effect on their own, but they lay the necessary groundwork for later mutations that do. But because of the small scale of such studies, it wasn’t clear to Desai whether these cases were the exception or the rule. “Do you typically get big differences in evolutionary potential that arise in the natural course of evolution, or for the most part is evolution predictable?” he said. “To answer this we needed the large scale of our experiment.” As in previous studies, Desai found that early mutations influence future evolution, shaping the path the yeast takes. But in Desai’s experiment, that path didn’t affect the final destination. “This particular kind of contingency actually makes fitness evolution more predictable, not less,” Desai said.
Sidebar: Diminishing Returns

Desai’s study isn’t the first to suggest that the law of diminishing returns applies to evolution. A famous decades-long experiment from Richard Lenski’s lab at Michigan State University, which has tracked E. coli for thousands of generations, found that fitness converged over time. But because of limitations in genomics technology in the 1990s, that study didn’t identify the mutations underlying those changes. “The 36 populations we had then would have been much more expensive to sequence than the hundred they did here,” said Michael Travisano of the University of Minnesota, who worked on the Michigan State study. More recently, two papers published in Science in 2011 mixed and matched a handful of beneficial mutations in different types of bacteria. When the researchers engineered those mutations into different strains of bacteria, they found that the fitter strains enjoyed a smaller benefit. Desai’s study examined a much broader combination of possible mutations, showing that the rule is much more general.
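The convergence that diminishing returns produces can be caricatured in a few lines of Python. In this toy model (the exponential decay form and every parameter are illustrative assumptions, not the fitted results of any of the studies above), each beneficial mutation that fixes adds less fitness the fitter the background already is, so a lineage that starts behind largely catches up:

```python
import math
import random

random.seed(3)

def evolve(f0, generations=500, fix_prob=0.1, base_gain=0.02, epistasis=5.0):
    """Toy diminishing-returns walk: each fixed mutation adds
    base_gain * exp(-epistasis * fitness), so the per-mutation gain
    shrinks as fitness rises. All parameters are illustrative."""
    f = f0
    for _ in range(generations):
        if random.random() < fix_prob:   # a beneficial mutation fixes
            f += base_gain * math.exp(-epistasis * f)
    return f

laggard = evolve(0.0)   # starts less fit, gains more per mutation
leader = evolve(0.3)    # starts fitter, gains less per mutation

initial_gap = 0.3
final_gap = leader - laggard   # far smaller than the initial 0.3
```

The lineages do not literally meet, but the gap shrinks severalfold: contingency in the starting point is largely erased, which is the qualitative pattern the experiments report.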
Desai found that just as a single trip to the gym benefits a couch potato more than an athlete, microbes that started off growing slowly gained a lot more from beneficial mutations than their fitter counterparts that shot out of the gate. “If you lag behind at the beginning because of bad luck, you’ll tend to do better in the future,” Desai said. He compares this phenomenon to the economic principle of diminishing returns — after a certain point, each added unit of effort helps less and less.

Scientists don’t know why all genetic roads in yeast seem to arrive at the same endpoint, a question that Desai and others in the field find particularly intriguing. The yeast developed mutations in many different genes, and scientists found no obvious link among them, so it’s unclear how these genes interact in the cell, if they do at all. “Perhaps there is another layer of metabolism that no one has a handle on,” said Vaughn Cooper, a biologist at the University of New Hampshire who was not involved in the study.

It’s also not yet clear whether Desai’s carefully controlled results are applicable to more complex organisms or to the chaotic real world, where both the organism and its environment are constantly changing. “In the real world, organisms get good at different things, partitioning the environment,” Travisano said. He predicts that populations within those ecological niches would still be subject to diminishing returns, particularly as they undergo adaptation. But it remains an open question, he said.

Nevertheless, there are hints that complex organisms can also quickly evolve to become more alike. A study published in May analyzed groups of genetically distinct fruit flies as they adapted to a new environment. Despite traveling along different evolutionary trajectories, the groups developed similarities in attributes such as fecundity and body size after just 22 generations.
“I think many people think about one gene for one trait, a deterministic way of evolution solving problems,” said David Reznick, a biologist at the University of California, Riverside. “This says that’s not true; you can evolve to be better suited to the environment in many ways.”
Our genomes are a mess—and we’re only beginning to understand the societal costs behind such genetic uncertainty.
Intellectual disability and developmental delay disorders are surprisingly common, but they’re frustratingly mysterious and hard to categorize. Patients often show a baffling mix of symptoms that are sometimes subtle and sometimes severe.

Why are developmental disorders so confusing? It turns out that there is a class of giant DNA mutations that shares features with developmental disorders: They are surprisingly common, frustratingly diverse, and hard to categorize. Researchers are now discovering that these mutations play a big role in developmental delay disorders. The baffling symptoms are a consequence of the underlying genetic turmoil.

Despite the tremendous amount of quality control machinery in the cell devoted to making accurate copies of our DNA, our genomes are surprisingly unstable. Mistakes are made, and not just small typos: Entire paragraphs and pages of our genetic text get duplicated or deleted. These large mutations are called “copy number variants,” or CNVs, and they add or subtract copies of genes.
Over the past decade, scientists have discovered CNVs to be shockingly common. One study found that we each carry, on average, about 1,000 CNVs, affecting roughly three percent of our genes. Different individuals have different CNVs, and so across the entire human population, much of the human genome is affected by these radical alterations.

It’s hard to know what impact all of this has on our health. We’re all walking around with these mutations, and most of us are just fine. In fact, many CNVs have existed in the human population for a long time and are broadly shared; many are relatively benign. But other CNV mutations are very rare, or even unique, and researchers are discovering that these giant mutations have a big medical impact. In fact, as one researcher recently put it, the ability to find CNV mutations was “the most substantial clinical benefit to come directly from the Human Genome Project in the first decade of the twenty-first century.”

Why? Because large DNA deletions or duplications explain many cases of developmental delay disorders. The most famous case is Down syndrome, which is caused by an entire extra chromosome. But many other disorders turn out to be due, in part, to CNVs, including autism spectrum disorders; more obscure ones like Angelman, DiGeorge, and Williams syndromes; as well as other uncategorized disorders. All together, intellectual disability and developmental delay affect about three percent of children. These disorders are costly to society and a huge challenge to the children and their families. Adding to the parents’ frustration is that they’re often unexplained: Doctors can’t always say what caused them, whether they’re likely to recur in siblings, or even how to treat them.

THAT IS NOW CHANGING.
Researchers have begun to discover how these confusingly diverse, frustratingly subtle, and surprisingly common disorders are often caused by CNV mutations that are themselves confusingly diverse, frustratingly subtle in their effects, and surprisingly common in the population.

One team of researchers, led by Evan Eichler at the University of Washington, has been building a CNV “morbidity map” of developmental delay disorders. In a 2011 study, Eichler and his colleagues looked for rare, very large CNV mutations in nearly 16,000 children with developmental delay disorders and in 8,300 healthy subjects. While mutations certainly occurred in the healthy subjects—11 percent of them had relatively large mutations in their DNA—they were much more common in the children with developmental delay. The very largest mutations were almost 50 times more likely to occur in children with developmental delay than in the control subjects.

Finding these mutations is only the beginning. Understanding why they cause particular effects is the next challenge. Because these mutations are so varied, and because they often affect multiple genes at once, it can be hard to figure out exactly what went wrong. To get at this question, Eichler and his colleagues completed an even larger study that included nearly 30,000 children with developmental delay. With so many patients, the researchers were able to find patterns among the mutations and symptoms that at first seemed to have little to do with each other.

For example, the researchers found a group of patients whose various mutations had one thing in common: They damaged a gene called ZMYND11. Ordinarily, these patients wouldn’t be diagnosed with the same disorder: Some had severe intellectual disability, while others showed normal intelligence. But they all had some symptoms in common, including subtle facial deformities, delayed speech, and behavioral difficulties.
The authors noted that one of the male patients had been very hard to categorize. He was diagnosed with “borderline personality disorder, bipolar disorder, psychosis, depression, low frustration tolerance leading to aggression and ADHD.” The genetic results show the underlying cause, and by relating his symptoms to those of other patients who carry ZMYND11 mutations, they give his physicians a chance to find better ways to treat him.

As geneticists dig into the seismic disruptions caused by CNVs, the confusing landscape of developmental disorders will begin to make more sense. But as one researcher wrote in a comment on Eichler’s study, as we learn more about these common mutations, we’ll find that many people lie in a gray area. They’ll carry mutations “for which the majority of carriers do not meet the criteria for any medical diagnosis or disability,” but which clearly cause problems in some people. This will be a challenge to society: “On the one hand, huge numbers of people might be stigmatized.” But this might also allow us to help people: “On the other hand, these CNVs might be contributing substantially to societal disability and disparity, and affected individuals might be precisely the group that could benefit from early supportive intervention.” Of course, this problem isn’t unique to CNVs—it’s the ever-present dilemma we continue to face as we learn to better understand human genetics.
Michael White is a systems biologist at the Department of Genetics and the Center for Genome Sciences and Systems Biology at the Washington University School of Medicine in St. Louis, where he studies how DNA encodes information for gene regulation. He co-founded the online science pub The Finch and Pea. Follow him on Twitter @genologos.
Evolutionary theory is still founded on the ideas of random mutation and natural selection. The problem is that very few (if any) mutations are truly random; epigenetics has changed so much of what we thought we knew about how organisms evolve. [You might guess that I am on the "yes, evolutionary theory needs an upgrade" side of the debate.]
In this article from Nature, Kevin Laland and colleagues argue the "yes, urgently" position; Gregory A. Wray, Hopi E. Hoekstra and colleagues argue the "no, all is well" perspective.
Does evolutionary theory need a rethink? Yes, urgently
Without an extended evolutionary framework, the theory neglects key processes, say Kevin Laland and colleagues.

Charles Darwin conceived of evolution by natural selection without knowing that genes exist. Now mainstream evolutionary theory has come to focus almost exclusively on genetic inheritance and processes that change gene frequencies. Yet new data pouring out of adjacent fields are starting to undermine this narrow stance. An alternative vision of evolution is beginning to crystallize, in which the processes by which organisms grow and develop are recognized as causes of evolution.

Some of us first met to discuss these advances six years ago. In the time since, as members of an interdisciplinary team, we have worked intensively to develop a broader framework, termed the extended evolutionary synthesis1 (EES), and to flesh out its structure, assumptions and predictions. In essence, this synthesis maintains that important drivers of evolution, ones that cannot be reduced to genes, must be woven into the very fabric of evolutionary theory. We believe that the EES will shed new light on how evolution works.

We hold that organisms are constructed in development, not simply ‘programmed’ to develop by genes. Living things do not evolve to fit into pre-existing environments, but co-construct and coevolve with their environments, in the process changing the structure of ecosystems.

The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science1, 2. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position get stronger every day. Yet the mere mention of the EES often evokes an emotional, even hostile, reaction among evolutionary biologists.
Too often, vital discussions descend into acrimony, with accusations of muddle or misrepresentation. Perhaps haunted by the spectre of intelligent design, evolutionary biologists wish to show a united front to those hostile to science. Some might fear that they will receive less funding and recognition if outsiders — such as physiologists or developmental biologists — flood into their field. However, another factor is more important: many conventional evolutionary biologists study the processes that we claim are neglected, but they comprehend them very differently (see ‘No, all is well’). This is no storm in an academic tearoom; it is a struggle for the very soul of the discipline. Here we articulate the logic of the EES in the hope of taking some heat out of this debate and encouraging open discussion of the fundamental causes of evolutionary change (see Supplementary Information).

Core values

The core of current evolutionary theory was forged in the 1930s and 1940s. It combined natural selection, genetics and other fields into a consensus about how evolution occurs. This ‘modern synthesis’ allowed the evolutionary process to be described mathematically as changes in the frequencies of genetic variants in a population over time — as, for instance, in the spread of genetic resistance to the myxoma virus in rabbits.

In the decades since, evolutionary biology has incorporated developments consistent with the tenets of the modern synthesis. One such development is ‘neutral theory’, which emphasizes random events in evolution. However, standard evolutionary theory (SET) largely retains the same assumptions as the original modern synthesis, which continues to channel how people think about evolution.
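As a rough illustration (my addition, not part of the Nature piece), the kind of gene-frequency bookkeeping the modern synthesis formalized can be sketched in a few lines. The selection coefficient and starting frequency below are made up for the example; they are not estimates for the myxoma-resistance case.

```python
# Toy model of natural selection changing a variant's frequency over
# generations. The parameters (p, s) are illustrative only.

def next_freq(p, s):
    """One generation of haploid selection: carriers of the variant have
    relative fitness 1 + s; non-carriers have fitness 1."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.01  # starting frequency of a beneficial variant (e.g. a resistance allele)
s = 0.10  # assumed 10% fitness advantage for carriers
for generation in range(100):
    p = next_freq(p, s)

print(f"frequency after 100 generations: {p:.3f}")
```

Even this modest assumed advantage drives the variant from rare to nearly fixed within a hundred generations — the kind of quantitative prediction the synthesis made possible.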
Orange: Peter Chadwick/SPL; Blue: Lawrence Lawry/SPL | Plasticity: commodore butterflies emerge with different colours in dry (left) and wet seasons.
The story that SET tells is simple: new variation arises through random genetic mutation; inheritance occurs through DNA; and natural selection is the sole cause of adaptation, the process by which organisms become well-suited to their environments. In this view, the complexity of biological development — the changes that occur as an organism grows and ages — is of secondary, even minor, importance.

In our view, this ‘gene-centric’ focus fails to capture the full gamut of processes that direct evolution. Missing pieces include how physical development influences the generation of variation (developmental bias); how the environment directly shapes organisms’ traits (plasticity); how organisms modify environments (niche construction); and how organisms transmit more than genes across generations (extra-genetic inheritance). For SET, these phenomena are just outcomes of evolution. For the EES, they are also causes.

Valuable insight into the causes of adaptation and the appearance of new traits comes from the field of evolutionary developmental biology (‘evo-devo’). Some of its experimental findings are proving tricky to assimilate into SET. Particularly thorny is the observation that much variation is not random because developmental processes generate certain forms more readily than others3. For example, among one group of centipedes, each of the more than 1,000 species has an odd number of leg-bearing segments, because of the mechanisms of segment development3.

In our view, this concept — developmental bias — helps to explain how organisms adapt to their environments and diversify into many different species. For example, cichlid fishes in Lake Malawi are more closely related to other cichlids in Lake Malawi than to those in Lake Tanganyika, but species in both lakes have strikingly similar body shapes4. In each case, some fish have large fleshy lips, others protruding foreheads, and still others short, robust lower jaws.
SET explains such parallels as convergent evolution: similar environmental conditions select for random genetic variation with equivalent results. This account requires extraordinary coincidence to explain the multiple parallel forms that evolved independently in each lake. A more succinct hypothesis is that developmental bias and natural selection work together4, 5. Rather than selection being free to traverse across any physical possibility, it is guided along specific routes opened up by the processes of development5, 6.
“There is more to inheritance than genes.”
Another kind of developmental bias occurs when individuals respond to their environment by changing their form — a phenomenon called plasticity. For instance, leaf shape changes with soil water and chemistry. SET views this plasticity as merely fine-tuning, or even noise. The EES sees it as a plausible first step in adaptive evolution. The key finding here is that plasticity not only allows organisms to cope with new environmental conditions but also to generate traits that are well-suited to them. If selection preserves genetic variants that respond effectively when conditions change, then adaptation largely occurs by accumulation of genetic variations that stabilize a trait after its first appearance5, 6. In other words, often it is the trait that comes first; genes that cement it follow, sometimes several generations later5.

Studies of fish, birds, amphibians and insects suggest that adaptations that were, initially, environmentally induced may promote colonization of new environments and facilitate speciation5, 6. Some of the best-studied examples of this are in fishes, such as sticklebacks and Arctic char. Differences in the diets and conditions of fish living at the bottom and in open water have induced distinct body forms, which seem to be evolving reproductive isolation, a stage in forming new species. The number of species in a lineage does not depend solely on how random genetic variation is winnowed through different environmental sieves. It also hangs on developmental properties that contribute to the lineage’s ‘evolvability’.

In essence, SET treats the environment as a ‘background condition’, which may trigger or modify selection, but is not itself part of the evolutionary process. It does not differentiate between how termites become adapted to mounds that they construct and, say, how organisms adapt to volcanic eruptions. We view these cases as fundamentally different7. Volcanic eruptions are idiosyncratic events, independent of organisms’ actions.
By contrast, termites construct and regulate their homes in a repeatable, directional manner that is shaped by past selection and that instigates future selection. Similarly, mammals, birds and insects defend, maintain and improve their nests — adaptive responses to nest building that have evolved again and again7. This ‘niche construction’, like developmental bias, means that organisms co-direct their own evolution by systematically changing environments and thereby biasing selection7.

Inheritance beyond genes

SET has long regarded inheritance mechanisms outside genes as special cases; human culture being the prime example. The EES explicitly recognizes that parent–offspring similarities result in part from parents reconstructing their own developmental environments for their offspring. ‘Extra-genetic inheritance’ includes the transmission of epigenetic marks (chemical changes that alter DNA expression but not the underlying sequence) that influence fertility, longevity and disease resistance across taxa8. In addition, extra-genetic inheritance includes socially transmitted behaviour in animals, such as nut cracking in chimpanzees or the migratory patterns of reef fishes8, 9. It also encompasses those structures and altered conditions that organisms leave to their descendants through their niche construction — from beavers’ dams to worm-processed soils7, 10.

Research over the past decade has established such inheritance to be so widespread that it should be part of general theory. Mathematical models of evolutionary dynamics that incorporate extra-genetic inheritance make different predictions from those that do not7–9. Inclusive models help to explain a wide range of puzzling phenomena, such as the rapid colonization of North America by the house finch, the adaptive potential of invasive plants with low genetic diversity, and how reproductive isolation is established.
Such legacies can even generate macro-evolutionary patterns. For instance, evidence suggests that sponges oxygenated the ocean and by doing so created opportunities for other organisms to live on the seabed10. Accumulating fossil data indicate that inherited modifications of the environment by species have repeatedly facilitated, sometimes after millions of years, the evolution of new species and ecosystems10.

Better together

The above insights derive from different fields, but fit together with surprising coherence. They show that variation is not random, that there is more to inheritance than genes, and that there are multiple routes to the fit between organisms and environments. Importantly, they demonstrate that development is a direct cause of why and how adaptation and speciation occur, and of the rates and patterns of evolutionary change.

SET consistently frames these phenomena in a way that undermines their significance. For instance, developmental bias is generally taken to impose ‘constraints’ on what selection can achieve — a hindrance that explains only the absence of adaptation. By contrast, the EES recognizes developmental processes as a creative element, demarcating which forms and features evolve, and hence accounting for why organisms possess the characters that they do.

Researchers in fields from physiology and ecology to anthropology are running up against the limiting assumptions of the standard evolutionary framework without realizing that others are doing the same. We believe that a plurality of perspectives in science encourages development of alternative hypotheses, and stimulates empirical work. No longer a protest movement, the EES is now a credible framework inspiring useful work by bringing diverse researchers under one theoretical roof to effect conceptual change in evolutionary biology.
*******
Does evolutionary theory need a rethink? No, all is well
Theory accommodates evidence through relentless synthesis, say Gregory A. Wray, Hopi E. Hoekstra and colleagues.

In October 1881, just six months before he died, Charles Darwin published his final book. The Formation of Vegetable Mould, Through the Action of Worms11 sold briskly: Darwin’s earlier publications had secured his reputation. He devoted an entire book to these humble creatures in part because they exemplify an interesting feedback process: earthworms are adapted to thrive in an environment that they modify through their own activities.

Darwin learned about earthworms from conversations with gardeners and his own simple experiments. He had a genius for distilling penetrating insights about evolutionary processes — often after amassing years of observational and experimental data — and he drew on such disparate topics as agriculture, geology, embryology and behaviour. Evolutionary thinking ever since has followed Darwin’s lead in its emphasis on evidence and in synthesizing information from other fields.

A profound shift in evolutionary thinking began during the 1920s, when a handful of statisticians and geneticists began quietly laying the foundations for a dramatic transformation. Their work between 1936 and 1947 culminated in the ‘modern synthesis’, which united Darwin’s concept of natural selection with the nascent field of genetics and, to a lesser extent, palaeontology and systematics. Most importantly, it laid the theoretical foundations for a quantitative and rigorous understanding of adaptation and speciation, two of the most fundamental evolutionary processes.
John van Wyhe/Darwin-online.org.uk
A worm cast pictured in Charles Darwin’s final book.
In the decades since, generations of evolutionary biologists have modified, corrected and extended the framework of the modern synthesis in countless ways. Like Darwin, they have drawn heavily from other fields. When molecular biologists identified DNA as the material basis for heredity and trait variation, for instance, their discoveries catalysed fundamental extensions to evolutionary theory. For example, the realization that many genetic changes have no fitness consequences led to major theoretical advances in population genetics. The discovery of ‘selfish’ DNA prompted discussions about selection at the level of genes rather than traits. Kin selection theory, which describes how traits affecting relatives are selected, represents another extension12. Nonetheless there are evolutionary biologists (see ‘Yes, urgently’) who argue that theory has since ossified around genetic concepts. More specifically, they contend that four phenomena are important evolutionary processes: phenotypic plasticity, niche construction, inclusive inheritance and developmental bias. We could not agree more. We study them ourselves. But we do not think that these processes deserve such special attention as to merit a new name such as ‘extended evolutionary synthesis’. Below we outline three reasons why we believe that these topics already receive their due in current evolutionary theory. New words, old concepts The evolutionary phenomena championed by Laland and colleagues are already well integrated into evolutionary biology, where they have long provided useful insights. Indeed, all of these concepts date back to Darwin himself, as exemplified by his analysis of the feedback that occurred as earthworms became adapted to their life in soil. Today we call such a process niche construction, but the new name does not alter the fact that evolutionary biologists have been studying feedback between organisms and the environment for well over a century13. 
Stunning adaptations such as termite mounds, beaver dams, and bowerbird displays have long been a staple of evolutionary studies. No less spectacular are cases that can only be appreciated at the microscopic or molecular scale, such as viruses that hijack host cells to reproduce and ‘quorum sensing’, a sort of group think by bacteria.

Another process, phenotypic plasticity, has drawn considerable attention from evolutionary biologists. Countless cases in which the environment influences trait variation have been documented — from the jaws of cichlid fishes that change shape when food sources alter, to leaf-mimicking insects that are brown if born in the dry season and green in the wet. Technological advances in the past decade have revealed an incredible degree of plasticity in gene expression in response to diverse environmental conditions, opening the door to understanding its material basis. Much discussed, too, was a book5 by behavioural scientist Mary Jane West-Eberhard that explored how plasticity might precede genetic changes during adaptation.

So, none of the phenomena championed by Laland and colleagues are neglected in evolutionary biology. Like all ideas, however, they need to prove their value in the marketplace of rigorous theory, empirical results and critical discussion. The prominence that these four phenomena command in the discourse of contemporary evolutionary theory reflects their proven explanatory power, not a lack of attention.

Modern expansion

Furthermore, the phenomena that interest Laland and colleagues are just four among many that offer promise for future advances in evolutionary biology. Most evolutionary biologists have a list of topics that they would like to see given more attention. Some would argue that epistasis — complex interactions among genetic variants — has long been under-appreciated. Others would advocate for cryptic genetic variation (mutations that affect only traits under specific genetic or environmental conditions).
Still others would stress the importance of extinction, or adaptation to climate change, or the evolution of behaviour. The list goes on. We could stop and argue about whether ‘enough’ attention is being paid to any of these. Or we could roll up our sleeves, get to work, and find out by laying the theoretical foundations and building a solid casebook of empirical studies. Advocacy can take an idea only so far.

What Laland and colleagues term the standard evolutionary theory is a caricature that views the field as static and monolithic. They see today’s evolutionary biologists as unwilling to consider ideas that challenge convention. We see a very different world. We consider ourselves fortunate to live and work in the most exciting, inclusive and progressive period of evolutionary research since the modern synthesis. Far from being stuck in the past, current evolutionary theory is vibrantly creative and rapidly growing in scope. Evolutionary biologists today draw inspiration from fields as diverse as genomics, medicine, ecology, artificial intelligence and robotics. We think Darwin would approve.

Genes are central

Finally, diluting what Laland and colleagues deride as a ‘gene-centric’ view would de-emphasize the most powerfully predictive, broadly applicable and empirically validated component of evolutionary theory. Changes in the hereditary material are an essential part of adaptation and speciation. The precise genetic basis for countless adaptations has been documented in detail, ranging from antibiotic resistance in bacteria to camouflage coloration in deer mice, to lactose tolerance in humans. Although genetic changes are required for adaptation, non-genetic processes can sometimes play a part in how organisms evolve. Laland and colleagues are correct that phenotypic plasticity, for instance, may contribute to the adaptedness of an individual. A seedling might bend towards brighter light, growing into a tree with a different shape from its siblings’.
Many studies have shown that this kind of plasticity is beneficial, and that it can readily evolve if there is genetic variation in the response14. This role for plasticity in evolutionary change is so well documented that there is no need for special advocacy.
“What matters is the heritable differences in traits, especially those that bestow some selective advantage.”
Much less clear is whether plasticity can ‘lead’ genetic variation during adaptation. More than half a century ago, developmental biologist Conrad Waddington described a process that he called genetic assimilation15. Here, new mutations can sometimes convert a plastic trait into one that develops even without the specific environmental condition that originally induced it. Few cases have been documented outside of the laboratory, however. Whether this is owing to a lack of serious attention or whether it reflects a genuine rarity in nature can be answered only by further study.

Lack of evidence also makes it difficult to evaluate the role that developmental bias may have in the evolution (or lack of evolution) of adaptive traits. Developmental processes, based on features of the genome that may be specific to a particular group of organisms, certainly can influence the range of traits that natural selection can act on. However, what matters ultimately is not the extent of trait variation, nor even its precise mechanistic causes. What matters is the heritable differences in traits, especially those that bestow some selective advantage. Likewise, there is little evidence for the role of inherited epigenetic modification (part of what was termed ‘inclusive inheritance’) in adaptation: we know of no case in which a new trait has been shown to have a strictly epigenetic basis divorced from gene sequence. On both topics, further research will be valuable.

All four phenomena that Laland and colleagues promote are ‘add-ons’ to the basic processes that produce evolutionary change: natural selection, drift, mutation, recombination and gene flow. None of these additions is essential for evolution, but they can alter the process under certain circumstances. For this reason they are eminently worthy of study. We invite Laland and colleagues to join us in a more expansive extension, rather than imagining divisions that do not exist.
We appreciate their ideas as an important part of what evolutionary theory might become in the future. We, too, want an extended evolutionary synthesis, but for us, these words are lowercase because this is how our field has always advanced16.

The best way to elevate the prominence of genuinely interesting phenomena such as phenotypic plasticity, inclusive inheritance, niche construction and developmental bias (and many, many others) is to strengthen the evidence for their importance. Before claiming that earthworms “have played a more important part in the history of the world than most persons would at first suppose”11, Darwin collected more than 40 years of data. Even then, he published only for fear that he would soon be “joining them”17.
Odling-Smee, F. J., Laland, K. N. & Feldman, M. W. Niche Construction: The Neglected Process in Evolution (Princeton Univ. Press, 2003).
Jablonka, E. & Lamb, M. Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life (MIT Press, 2014).
Hoppitt, W. & Laland, K. N. Social Learning: An Introduction to Mechanisms, Methods, and Models (Princeton Univ. Press, 2013).
Erwin, D. H. & Valentine, J. W. The Cambrian Explosion: The Construction of Animal Biodiversity (Roberts, 2013).
Darwin, C. The Formation of Vegetable Mould, Through the Action of Worms (John Murray, 1881).
Alcock, J. The Triumph of Sociobiology (Oxford Univ. Press, 2001).
Bailey, N. W. Trends Ecol. Evol. 27, 561–569 (2012).