How Can the Study of Complexity Transform Our Understanding of the World?
By Melanie Mitchell
January 20, 2014
In 1894, the physicist and Nobel laureate Albert Michelson declared that science was almost finished; the human race was within a hair’s breadth of understanding everything:
It seems probable that most of the grand underlying principles have now been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice.
Bold and heady predictions like this often seem destined to topple, and, to be sure, the world of physics was soon shaken by the revolutions of relativity and quantum mechanics.
But as the 20th century unfolded, it turned out to be the phenomena closest to our own human scale—biology, social science, economics, and politics, among others—that most notably eluded explanation by any grand principles. The deeper we dig into the workings of ourselves and our society, the more unexpected complexity we find. Fittingly, it was in the 20th century that science began to bridge disciplinary boundaries in order to search for principles of complexity itself.
What is Complexity?
The “study of complexity” refers to the attempt to find common principles underlying the behavior of complex systems—systems in which large collections of components interact in nonlinear ways. Here, the term nonlinear implies that the system can’t be understood simply by understanding its individual components; nonlinear interactions cause the whole to be “more than the sum of its parts.”
Complex systems scientists try to understand how such collective sophistication can come about, whether it be in ant colonies, cells, brains, immune systems, social groups, or economic markets. People who study complexity are intrigued by the suggestive similarities among these disparate systems. All these systems exhibit self-organization: the system’s components organize themselves to act as a coherent whole without the benefit of any central or outside “controller”. Complex systems are able to encode and process information with a sophistication that is not available to the individual components. Complex systems evolve—they are continually changing in an open-ended way, and they learn and adapt over time. Such systems defy precise prediction, and resist the kind of equilibrium that would make them easier for scientists to understand.
Transforming Our Understanding
Of course all important scientific discoveries transform our understanding of nature, but I think that the study of complexity goes a step further: it not only helps us understand important phenomena, but changes our perspective on how to think about nature, and about science itself.
Here are a few examples of the surprising, perspective-changing discoveries of Complex Systems science. (If these don’t seem so surprising to you, it is because your perspective has already been changed by the sciences of complexity!)
Simple rules can yield complex, unpredictable behavior
Why can’t we seem to forecast the weather farther out than a week or so? Why is it so hard to project yearly variation in fishery populations? Why can’t we foresee stock market bubbles and crashes? In the past it was widely assumed that such phenomena are hard to predict because the underlying processes are highly complex, and that random factors must play a key role. However, Complex Systems science—especially the study of dynamics and chaos—has shown that complex behavior and unpredictability can arise in a system even if the underlying rules are extremely simple and completely deterministic. Often, the key to complexity is the iteration over time of simple, though nonlinear, interaction rules among the system’s components. It’s still not clear whether unpredictability in the weather, stock market, and animal populations is caused by such iteration alone, but the study of chaos has shown that it’s possible.
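To see how little machinery this takes, consider the logistic map, a standard textbook example of chaos. The short Python sketch below is purely illustrative (it is not drawn from any particular study): a one-line deterministic rule that, for the parameter value used here, amplifies a tiny difference in starting conditions until the two trajectories have nothing to do with each other.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), a simple deterministic rule
# that behaves chaotically for r near 4.  Illustrative example only.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map starting from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)  # starting point differs by one millionth

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.6f})")
```

After a few dozen steps the two trajectories, which began a millionth apart, bear no resemblance to each other, even though the rule contains no randomness at all.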
More is Different
Above I reiterated the old saw, “the whole is more than the sum of its parts”. The physicist Phil Anderson coined a better aphorism: he noted that a key lesson of complexity is that “more is different”.
Ant colonies are a great example of this. As the ecologist Nigel Franks puts it, “The solitary army ant is behaviorally one of the least sophisticated animals imaginable...If 100 army ants are placed on a flat surface, they will walk around and around in never decreasing circles until they die of exhaustion.” Yet put half a million of them together and the group as a whole behaves as a hard-to-predict “superorganism” with sophisticated, and sometimes frightening, “collective intelligence”. More is different.
Similar stories can be told for neurons in the brain, cells in the immune system, creativity and social movements in cities, and agents in market economies. The study of complexity has shown that when a system’s components have the right kind of interactions, its global behavior—the system’s capacity to process information, to make decisions, to evolve and learn—can be powerfully different from that of its individual components.
Network Thinking
In the early 2000s, the complete human genome was sequenced. While the benefits to science were enormous, some of the predictions made by prominent scientists and others had a Michelsonian flavor (see first paragraph). President Clinton echoed the widely held view that the Human Genome Project would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases.” Indeed, many scientists believed that a complete mapping of human genes would provide a nearly complete understanding of how genetics worked and which genes were responsible for which traits, and that this would pave the way for revolutionary medical discoveries and targeted gene therapies.
Now, more than a decade later, these predicted medical revolutions have not yet materialized. But the Human Genome Project, and the huge progress in genetics research that followed, did uncover some unexpected results. First, human genes (DNA sequences that code for proteins) number around 21,000—far fewer than anyone thought, and about the same number as in mice, worms, and mustard plants. Second, these protein-coding genes make up only about 2% of our DNA. Two mysteries emerge: If we humans have comparatively so few genes, where does our complexity come from? And as for that 98% of non-gene DNA, which in the past was dismissively called “junk DNA”, what is its function?
What geneticists have learned is that genetic elements in a cell, like ants in a colony, interact nonlinearly so as to create intricate information-processing networks. It is the networks, rather than the individual genes, that shape the organism. Moreover, and most surprising: the so-called “junk” DNA is key to forming these networks. As biologist John Mattick puts it, “The irony...is that what was dismissed as junk because it wasn’t understood will turn out to hold the secret of human complexity.”
Information-processing networks are emerging as a core organizing principle of biology. What used to be called “cellular signaling pathways” are now “cellular information processing networks.” New research on cancer treatments is focused not on individual genes but on disrupting the cellular information processing networks that many cancers exploit. Some types of bacteria are now known to communicate via “quorum sensing” networks in order to collectively attack a host; this discovery is also driving research into network-specific treatment of infections.
Over the last two decades an interdisciplinary science of networks has emerged, and has developed insights and research methods that apply to networks ranging from genetics to economics. Network thinking is the area of complex systems that has perhaps done the most to transform our understanding of the world.
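To give a flavor of what those research methods look like in practice, here is a brief illustrative Python sketch using the networkx library. The network below is a generic toy model, not a representation of any real genetic or economic system: the point is only that network thinking asks structural questions—which nodes the system's connectivity hinges on, and how unevenly the connections are spread—rather than studying components one at a time.

```python
# A generic taste of "network thinking" (illustrative only; not a model of
# any real gene-regulatory or economic network).  Requires the networkx library.

import networkx as nx

# A randomly grown network in which well-connected nodes tend to attract new
# links -- a common toy model for many real-world complex networks.
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)

# Network-level questions replace component-level ones: which nodes does the
# network's connectivity depend on most, and how skewed are the connections?
centrality = nx.betweenness_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
degrees = sorted((d for _, d in G.degree()), reverse=True)

print("Most central nodes:", hubs)
print("Top 10 node degrees:", degrees[:10])
print("Median degree:", degrees[len(degrees) // 2])
```

The skewed result—a few heavily connected hubs alongside many sparsely connected nodes—is exactly the kind of long-tailed signature that appears in the next section.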
Non-Normal is the New Normal
In 2009, Nobel Prize-winning economist Paul Krugman said, “Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy.” At least part of this “blindness” was due to the reliance on risk models based on so-called normal distributions.
Figure 1: (a) A hypothetical normal distribution of the probability of financial gain or loss under trading. (b) A hypothetical long-tailed distribution, showing only the loss side. The “tail” of the distribution is the far right-hand side. The long-tailed distribution predicts a considerably higher probability of catastrophic loss than the normal distribution.
The term normal distribution refers to the familiar bell curve. Economists and finance professionals often use such distributions to model the probability of gains and risk of losses from investments. Figure 1(a) shows a hypothetical normal distribution of risk. I’ve marked a hypothetical “catastrophic loss” on the graph. You can see that, given this distribution of risk, the probability of such a loss would be very near zero. Less probable, maybe, than a lightning strike right where you’re standing. Something you don’t have to worry about. Unless the model is wrong.
The study of complexity has shown that in nonlinear, highly networked systems, a more accurate estimation of risk would be a so-called “long-tailed” distribution. Figure 1(b) shows a hypothetical long-tailed distribution of risk (here, only the “loss” side is shown). The longer non-zero “tail” (far right-hand side) of this distribution shows that the probability of a catastrophic loss is significantly higher than for a system obeying a normal distribution. If risk models in 2008 had employed long-tailed rather than normal distributions, the possibility of an “extreme event”—here, “catastrophic loss”—would have been judged more likely.
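As a rough numerical illustration of the gap Figure 1 depicts (with made-up parameters, not real market data or any actual risk model), the following Python sketch compares the chance of a loss ten times the “typical” size under a thin-tailed normal model and under a long-tailed power-law model.

```python
# Illustrative comparison only: arbitrary parameters, chosen to show the
# orders-of-magnitude difference in tail probability between a normal and
# a long-tailed (power-law) model of losses.

import math

def normal_tail(k):
    """P(loss > k) for a standard normal model (losses in 'typical' units)."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=2.0, x_min=1.0):
    """P(loss > k) for a Pareto-style long-tailed model with exponent alpha."""
    return (x_min / k) ** alpha if k >= x_min else 1.0

for k in (3, 5, 10):
    print(f"loss > {k:2d} units:  normal model {normal_tail(k):.2e}   "
          f"long-tailed model {power_law_tail(k):.2e}")
```

Under the normal model the ten-unit loss is, for all practical purposes, impossible; under the long-tailed model it happens roughly one time in a hundred. That gap is the difference between a risk you can safely ignore and one you cannot.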
Because long-tailed distributions are now known to be signatures of complex networks, our growing understanding of such networks implies that risk models need to be rethought in many areas, ranging from disease epidemics to power grid failures; from financial crises to ecosystem collapses. The technologist Andreas Antonopoulos puts it succinctly: “The threat is complexity itself”.
Is Complexity a New Science?
“The new science of complexity” has become a catchphrase in some circles. Google reports nearly 87,000 hits on this phrase. But how “new” is the study of complexity? And to what extent is it actually a “science”?
The current scientific efforts centered around complexity have several antecedents. The Cybernetics movement of the 1940s and 50s, the General System Theory movement of the 1960s, and the more recent advent of Systems Biology, Systems Engineering, Systems Science, etc., all share goals with Complex Systems science: finding general principles that explain how system-level behavior emerges from interactions among lower-level components. The different movements capture different (though sometimes overlapping) communities and different foci of attention.
To my mind, Complexity refers not to a single science but rather to a community of scientists in different disciplines who share interdisciplinary interests, methodologies, and a mindset about how to address scientific problems. Just what this mindset consists of is hard to pin down. I would say it includes, first, the assumption that understanding complexity will require integrating concepts from dynamics, information, statistical physics, and evolution. And second, that computer modeling is an essential addition to traditional scientific theory and experimentation. As yet, Complexity is not a single unified science; rather, to paraphrase William James, it is still “the hope of a science”. I believe that this hope has great promise.
In our era of Big Data, what Complexity potentially offers is “Big Theory”—a scientific understanding of the complex processes that produce the data we are drowning in. If the field’s past contributions are any indication, Complexity’s sought-after big theory will even more profoundly transform our understanding of the world.
It’s something to look forward to. In the words of playwright Tom Stoppard: “It’s the best possible time to be alive, when almost everything you thought you knew is wrong.”
Discussion Questions
1. Can you identify any ways in which your own way of thinking has been changed by Complex Systems science?
2. The discussion above stated that when systems get too intricately networked, “the threat is complexity itself”. The network scientist Duncan Watts suggested that the notion “too big to fail” should be rethought as “too complex to exist.” Should we worry about the world becoming too complex? If so, what should we do about it?
3. To what extent do you think the ideas of complex systems are new? What would it take to create a unified science of complexity?
Resources and Further Reading:
http://complexityexplorer.org
- Anderson, P. W. More is different. Science, 177(4047), 1972, 393-396.
- Bettencourt, L. M., Lobo, J., Helbing, D., Kühnert, C., and West, G. B. Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences, 104(17), 2007, 7301-7306.
- Franks, N. R. Army ants: A collective intelligence. American Scientist, 77(2), 1989, 138-145.
- Hayden, E. C. Human genome at ten: Life is complicated. Nature, 464, 2010, 664-667.
- Krugman, P. How did economists get it so wrong? New York Times, September 2, 2009.
- Miller, J. H. and Page, S. E. Complex Adaptive Systems. Princeton University Press, 2007.
- Mitchell, M. Complexity: A Guided Tour. Oxford University Press, 2009.
- Newman, M. E. J. Networks: An Introduction. Oxford University Press, 2009.
- Watts, D. Too complex to exist. Boston Globe, June 14, 2009.
- West, G. Big data needs a big theory to go with it. Scientific American, May 15, 2013.