For anyone interested in a good overview of Kauffman's theories, this is an excellent introduction. These are a couple of his books:
The following material is posted at the Edge bio page for Kauffman: "THE ADJACENT POSSIBLE" [11.3.03], A Talk with Stuart Kauffman.
I had the privilege of working with Stuart while I was collaborating with the Centre for Business Innovation at Ernst and Young and he was chairing BIOS, a group of consulting scientists who were at the time building autonomous agent simulations for major organizations. We brought him into several projects as an advisor, and I found his input powerfully transformative; what a beautiful mind he has! Stuart's book "At Home in the Universe" is a must-read for anyone interested in evolution and emergence.
Introduction
Stuart Kauffman is a theoretical biologist who studies the origin of life and the origins of molecular organization. Thirty-five years ago, he developed the Kauffman models, which are random networks exhibiting a kind of self-organization that he terms "order for free." Kauffman is not easy. His models are rigorous, mathematical, and, to many of his colleagues, somewhat difficult to understand. A key to his worldview is the notion that convergent rather than divergent flow plays the deciding role in the evolution of life. He believes that the complex systems best able to adapt are those poised on the border between order and chaos.
Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a "new" biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.
Lately, Kauffman says that he has been "hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?"
"There is a chance that there are general laws. I've thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak's self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it."
—JB
STUART A. KAUFFMAN, a theoretical biologist, is emeritus professor of biochemistry at the University of Pennsylvania, a MacArthur Fellow and an external professor at the Santa Fe Institute. Dr. Kauffman was the founding general partner and chief scientific officer of The Bios Group, a company (acquired in 2003 by NuTech Solutions) that applies the science of complexity to business management problems. He is the author of The Origins of Order, Investigations, and At Home in the Universe: The Search for the Laws of Self-Organization.
Stuart Kauffman's Edge Bio Page
"THE ADJACENT POSSIBLE"
(STUART KAUFFMAN): In his famous book, What is Life?, Erwin Schrödinger asks, "What is the source of the order in biology?" He arrives at the idea that it depends upon quantum mechanics and a microcode carried in some sort of aperiodic crystal—which turned out to be DNA and RNA—so he is brilliantly right. But if you ask if he got to the essence of what makes something alive, it's clear that he didn't. Although today we know bits and pieces about the machinery of cells, we don't know what makes them living things. However, it is possible that I've stumbled upon a definition of what it means for something to be alive.
For the better part of a year and a half, I've been keeping a notebook about what I call autonomous agents. An autonomous agent is something that can act on its own behalf in an environment. Indeed, all free-living organisms are autonomous agents. Normally, when we think about a bacterium swimming upstream in a glucose gradient we say that the bacterium is going to get food. That is to say, we talk about the bacterium teleologically, as if it were acting on its own behalf in an environment. It is stunning that the universe has brought about things that can act in this way. How in the world has that happened?
As I thought about this, I noted that the bacterium is just a physical system; it's just a bunch of molecules that hang together and do things to one another. So, I wondered, what characteristics are necessary for a physical system to be an autonomous agent? After thinking about this for a number of months I came up with a tentative definition.
My definition is that an autonomous agent is something that can both reproduce itself and do at least one thermodynamic work cycle. It turns out that this is true of all free-living cells, excepting weird special cases. They all do work cycles, just like the bacterium spinning its flagellum as it swims up the glucose gradient. The cells in your body are busy doing work cycles all the time.
Definitions are neither true nor false; they're useful or useless. We can only find out if a definition is useful by trying to apply it to organisms, conceptual issues, and experimental issues. Hopefully, it turns out to be interesting.
Once I had this definition, my next step was to create and write about a hypothetical chemical autonomous agent. It turns out to be an open thermodynamic chemical system that is able to reproduce itself and, in doing so, performs a thermodynamic work cycle. I had to learn about work cycles, but it's just a new class of chemical reaction networks that nobody's ever looked at before. People have made self-reproducing molecular systems and molecular motors, but nobody's ever put the two together into a single system that is capable of both reproduction and doing a work cycle.
Imagine that inside the cell are two kinds of molecules—A and B—that can undergo three different reactions. A and B can make C and D, they can make E, or they can make F and G. There are three different reaction pathways, each of which has potential barriers along the reaction coordinate. Once the cell makes the membrane, A and B can partition into it, changing their rotational, vibrational, and translational motion. That, in turn, changes the heights and shapes of the potential barriers. Changing the heights of the potential barriers is precisely the manipulation of constraints. Thus, cells do thermodynamic work to build a structure called the membrane, which in turn manipulates constraints on reactions, meaning that cells do work at constructing constraints that manipulate constraints.
In addition, the cell does thermodynamic work to build an enzyme by linking amino acids together. The enzyme binds to the transition state that carries A and B to C and D—not to E or F and G—so it catalyzes that specific reaction, channeling the release of energy down a specific pathway within a small number of degrees of freedom. You make C and D, but you don't make E or F and G. D may then attach to a transmembrane channel and give up some of its vibrational energy, popping the channel open and allowing in an ion, which then does something further in the cell. So cells do work to construct constraints, which then cause the release of energy in specific ways so that work is done. That work then propagates, which is fascinating.
As I proceed here there are several points to keep in mind. One is that you cannot do a work cycle at equilibrium, meaning that the concept of an autonomous agent is inherently a non-equilibrium concept.
A second is that once this concept is developed it's only going to be a matter of perhaps 10, 15, or 20 years until, somewhere in the marriage between biology and nanotechnology, we will make autonomous agents that will create chemical systems that reproduce themselves and do work cycles. This means that we have a technological revolution on our hands, because autonomous agents don't just sit and talk and pass information around. They can actually build things.
The third thing is that this may be an adequate definition of life. In the next 30 to 50 years we are either going to make a novel life form or we will find one—on Mars, Titan, or somewhere else. I hope that what we find is radically different than life on Earth because it will open up two major questions. First, what would it be like to have a general biology, a biology free from the constraints of terrestrial biology? And second, are there laws that govern biospheres anywhere in the universe? I'd like to think that there are such laws. Of course, we don't know that there are—we don't even know that there are such laws for the Earth's biosphere—but I have three or four candidate laws that I struggle with.
All of this points to the need for a theory of organization, and we can start to think about such a theory by critiquing the concept of work. If you ask a physicist what work is, he'll say that it's force acting through a distance. When you strike a hockey puck, for example, the more you accelerate it, the more little increments of force you've applied to it. The integral of that force over the distance the puck travels is the work that you've done. The result is just a number.
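The textbook definition Kauffman is paraphrasing here can be written compactly:

```latex
% Work as force acting through a distance (the hockey-puck example):
% summing the increments of force applied along the path gives a single number.
W = \int_{x_0}^{x_1} F(x)\,dx
```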
In any specific case of work, however, there's an organization to the process. The description of the organization of the process that allows work to happen is missing from its numerical representation. In his book on the second law, Peter Atkins gives a definition of work that I find congenial. He says that work itself is a thing—the constrained release of energy. Think of a cylinder and a piston in an old steam engine. The steam pushes down on the piston, and it converts the randomness of the steam inside the head of the cylinder into the rectilinear motion of the piston down the cylinder. In this process, many degrees of freedom are translated into a few.
The puzzle becomes apparent when we ask some new questions. What are the constraints? Obviously the constraints are the cylinder and the piston, the fact that the piston is inside the cylinder, the fact that there's some grease between the piston and the cylinder so the steam can't escape and some rods attached to the piston. But where did the constraints come from? In virtually every case it takes work to make constraints. Somebody had to make the cylinder, somebody had to make the piston, and somebody had to assemble them.
That it takes work to make constraints and it takes constraints to make work is a very interesting cycle. This idea is nowhere to be found in our definition of work, but it's physically correct in most cases, and certainly in organisms. This means that we are lacking theory and points towards the importance of the organization of process.
The life cycle of a cell is simply amazing. It does work to construct constraints on the release of energy, which does work to construct more constraints on the release of energy, which does work to construct even more constraints on the release of energy, and other kinds of work as well. It builds structure. Cells don't just carry information. They actually build things until something astonishing happens: a cell completes a closed nexus of work tasks, and builds a copy of itself. Although he didn't know about cells, Kant spoke about this 230 years ago when he said that an organized being possesses a self-organizing propagating whole that is able to make more of itself. But although cells can do this, that fact is nowhere in our physics. It's not in our notion of matter, it's not in our notion of energy, it's not in our notion of information, and it's not in our notion of entropy. It's something else. It has to do with organization, propagation of organization, work, and constraint construction. All of this has to be incorporated into some new theory of organization.
I can push this a little farther by thinking of a puzzle about Maxwell's demon. Everybody knows about Maxwell's demon: he was supposed to separate the fast molecules from the slow molecules in a partitioned box by letting only the slow molecules through a flap valve to the other side. Starting from equilibrium, the demon could thereby build up a temperature gradient, allowing work to be extracted. There's been a lot of good scientific work showing that at equilibrium the demon can never win. So let's go straight to a non-equilibrium setting and ask some new questions.
Now think of a box with a partition and a flap valve. In the left side of the box there are N molecules and in the right side of the box there are N molecules, but the ones in the left side are moving faster than the ones in the right. The left side of the box is hotter, so there is a source of free energy. If you were to put a little windmill near the flap valve and open it, there would be a transient wind from the left to the right box, causing the windmill to orient itself towards the flap valve and spin. The system detects a source of free energy, the vane on the back of the windmill orients the windmill because of the transient wind, and then work is extracted. Physicists would say that the demon performs a measurement to detect the source of free energy. My new question is, how does the demon know what measurement to make?
Now the demon does a quite fantastic experiment. Using a magic camera he takes a picture and measures the instantaneous position of all the molecules in the left and right box. That's fine, but from that heroic experiment the demon cannot deduce that the molecules are going faster in the left box than in the right box. If you took two pictures a second apart, or if you measured the momentum transfer to the walls you could figure it out, but he can't do so with one picture. So how does the demon know what experiment to do? The answer is that the demon doesn't know what experiment to do.
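The one-picture-versus-two-pictures point can be checked with a toy simulation (a sketch of my own, not from the talk; all parameters are illustrative): a single snapshot of positions looks the same for the hot and cold sides, but two frames a moment apart immediately reveal which side has the faster molecules.

```python
import random

random.seed(0)
N, dt = 1000, 0.01  # molecules per side, time between the two pictures

def snapshot_pair(speed_scale):
    """Positions of N molecules in a unit box at time t and t + dt."""
    pos0 = [random.random() for _ in range(N)]
    vel = [random.gauss(0, speed_scale) for _ in range(N)]
    pos1 = [(x + v * dt) % 1.0 for x, v in zip(pos0, vel)]  # wrap at walls
    return pos0, pos1

left0, left1 = snapshot_pair(speed_scale=5.0)    # hot side: fast molecules
right0, right1 = snapshot_pair(speed_scale=1.0)  # cold side: slow molecules

# One picture: both position sets are just uniform scatters in the box.
print(sum(left0) / N, sum(right0) / N)  # both near 0.5

# Two pictures: mean displacement between frames exposes the hot side.
disp = lambda p0, p1: sum(min(abs(a - b), 1 - abs(a - b))
                          for a, b in zip(p0, p1)) / N
print(disp(left0, left1), disp(right0, right1))
```

The instantaneous positions carry no temperature information; only the comparison across time does, which is exactly why the demon's single magic photograph tells him nothing.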
Let's turn to the biosphere. If a random mutation happens by which some organism can detect and utilize some new source of free energy, and it's advantageous for the organism, natural selection will select it. The whole biosphere is a vast, linked web of work done to build things so that, stunningly enough, sunlight falls and redwood trees get built and become the homes of things that live in their bark. The complex web of the biosphere is a linked set of work tasks, constraint construction, and so on. Operating according to natural selection, the biosphere is able to do what Maxwell's demon can't do by himself. The biosphere is one of the most complex things we know in the universe, necessitating a theory of organization that describes what the biosphere is busy doing, how it is organized, how work is propagated, how constraints are built, and how new sources of free energy are detected. Currently we have no theory of it—none at all.
Right now I'm busy thinking about this incredibly important problem. The frustration I'm facing is that it's not clear how to build mathematical theories, so I have to talk about what Darwin called adaptations and then what he called pre-adaptations.
You might look at a heart and ask, what is its function? Darwin would answer that the function of the heart is to pump blood, and that's true—it's the cause for which the heart was selected. However, your heart also makes sounds, which is not the function of your heart. This leads us to the easy but puzzling conclusion that the function of a part of an organism is a subset of its causal consequences, meaning that to analyze the function of a part of an organism you need to know the whole organism and its environment. That's the easy part; there's an inalienable holism about organisms.
But here's the strange part: Darwin talked about pre-adaptations, by which he meant a causal consequence of a part of an organism that might turn out to be useful in some funny environment and therefore be selected. The story of Gertrude the flying squirrel illustrates this: About 63 million years ago there was an incredibly ugly squirrel that had flaps of skin connecting her wrists to her ankles. She was so ugly that none of her squirrel colleagues would play or mate with her, so one day she was eating lunch all alone in a magnolia tree. There was an owl named Bertha in the neighboring pine tree, and Bertha took a look at Gertrude and thought, "Lunch!" and came flashing down out of the sunlight with her claws extended. Gertrude was very scared and she jumped out of the magnolia tree and, surprised, she flew! She escaped from the befuddled Bertha, landed, and became a heroine to her clan. She was married in a civil ceremony a month later to a very handsome squirrel, and because the gene for the flaps of skin was Mendelian dominant, all of their kids had the same flaps. That's roughly why we now have flying squirrels.
The question is, could one have said ahead of time that Gertrude's flaps could function as wings? Well, maybe. Could we have said that some molecular mutation in a bacterium that allows it to pick up calcium currents, thereby allowing it to detect a paramecium in its vicinity and escape it, could function as a paramecium detector? No. Knowing what a Darwinian pre-adaptation is, do you think we could say ahead of time what all possible Darwinian pre-adaptations are? No, we can't. That means that we don't know what the configuration space of the biosphere is.
It is important to note how strange this is. In statistical mechanics we start with the famous liter volume of gas, and the molecules are bouncing back and forth, and it takes six numbers to specify the position and momentum of each particle. It's essential to begin by describing the set of all possible configurations and momenta of the gas, giving you a 6N dimensional phase space. You then divide it up into little 6N dimensional boxes and do statistical mechanics. But you begin by being able to say what the configuration space is. Can we do that for the biosphere?
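In standard notation, the statistical-mechanics setup Kauffman describes is:

```latex
% N gas molecules, each with position q_i and momentum p_i in R^3:
% the phase space Gamma is 6N-dimensional, and statistical mechanics
% proceeds by partitioning it into small 6N-dimensional cells.
\Gamma = \{(\mathbf{q}_1, \mathbf{p}_1, \dots, \mathbf{q}_N, \mathbf{p}_N)\},
\qquad \mathbf{q}_i,\ \mathbf{p}_i \in \mathbb{R}^3,
\qquad \dim \Gamma = 6N
```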
I'm going to try two answers. Answer one is no. We don't know what Darwinian pre-adaptations are going to be, which supplies an arrow of time. The same thing is true in the economy; we can't say ahead of time what technological innovations are going to happen. Nobody was thinking of the Web 300 years ago. The Romans were using things to lob heavy rocks, but they certainly didn't have the idea of cruise missiles. So I don't think we can do it for the biosphere either, or for the econosphere.
You might say that it's just a classical phase space—leaving quantum mechanics out—and I suppose you can push me. You could say we can state the configuration space, since it's simply a classical, 6N-dimensional phase space. But we can't say what the macroscopic variables are, like wings, paramecium detectors, big brains, ears, hearing and flight, and all of the things that have come to exist in the biosphere.
All of this says to me that my tentative definition of an autonomous agent is a fruitful one, because it's led to all of these questions. I think I'm opening new scientific doors. The question of how the universe got complex is buried in this question about Maxwell's demon, for example, and how the biosphere got complex is buried in everything that I've said. We don't have any answers to these questions; I'm not sure how to get answers. This leaves me appalled by my efforts, but the fact that I'm asking what I think are fruitful questions is why I'm happy with what I'm doing.
I can begin to imagine making models of how the universe gets more complex, but at the same time I'm hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?
There is a chance that there are general laws. I've thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak's self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it.
One other problem concerns what I call the conditions of co-evolutionary assembly. Why should co-evolution work at all? Why doesn't it just wind up killing everything as everything juggles with everything and disrupts the ways of making a living that organisms have by the adaptiveness of other organisms? The same question applies to the economy. How can human beings assemble this increasing diversity and complexity of ways of making a living? Why does it work in the common law? Why does the common law stay a living body of law? There must be some very general conditions about co-evolutionary assembly. Notice that nobody is in charge of the evolution of the common law, the evolution of the biosphere, or the evolution of the econosphere. Somehow, systems get themselves to a position where they can carry out coevolutionary assembly. That question isn't even on the books, but it's a profound question; it's not obvious that it should work at all. So I'm stuck.
STUART KAUFFMAN
Peer perspectives on Kauffman and his work:
"Order for Free"
Brian Goodwin: Stuart is primarily interested in the emergence of order in evolutionary systems. That's his fix. It's exactly the same as mine, in terms of the orientation towards biology, but he uses a very different approach. Our approaches are complementary with respect to the same problem: How do you understand emergent novelty in evolution? Emergent order? Stuart's great contributions are there.
__________
STUART KAUFFMAN is a biologist; professor of biochemistry at the University of Pennsylvania and a professor at the Santa Fe Institute; author of Origins of Order: Self-Organization and Selection in Evolution (1993) and At Home in the Universe (1995).
Stuart Kauffman: What kinds of complex systems can evolve by accumulation of successive useful variations? Does selection by itself achieve complex systems able to adapt? Are there lawful properties characterizing such complex systems? The overall answer may be that complex systems constructed so that they're on the boundary between order and chaos are those best able to adapt by mutation and selection.
Chaos is a subset of complexity. It's an analysis of the behavior of continuous dynamical systems — like hydrodynamic systems, or the weather — or discrete systems that show recurrences of features and high sensitivity to initial conditions, such that very small changes in the initial conditions can lead a system to behave in very different ways. A good example of this is the so-called butterfly effect: the idea is that a butterfly in Rio can change the weather in Chicago. An infinitesimal change in initial conditions leads to divergent pathways in the evolution of the system. Those pathways are called trajectories. The enormous puzzle is the following: in order for life to have evolved, it can't possibly be the case that trajectories are always diverging. Biological systems can't work if divergence is all that's going on. You have to ask what kinds of complex systems can accumulate useful variation.
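Sensitivity to initial conditions is easy to demonstrate numerically. Here is a minimal sketch using the logistic map (my illustrative choice; no specific system is named in the text): two trajectories starting a millionth apart diverge to order-one separation within a few dozen steps.

```python
# Divergent flow in a chaotic system: the logistic map at r = 4,
# x_{n+1} = r * x_n * (1 - x_n), a standard fully chaotic example.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial conditions differ by 1e-6

# The tiny initial gap grows roughly exponentially until it saturates
# at the size of the attractor itself.
print(abs(a[1] - b[1]))                                 # still tiny
print(max(abs(x - y) for x, y in zip(a[20:], b[20:])))  # order one
```

The convergent flow Kauffman describes next is the opposite behavior: nearby trajectories that are pulled together rather than flung apart.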
We've discovered the fact that in the evolution of life very complex systems can have convergent flow and not divergent flow. Divergent flow is sensitivity to initial conditions. Convergent flow means that even different starting places that are far apart come closer together. That's the fundamental principle of homeostasis, or stability to perturbation, and it's a natural feature of many complex systems. We haven't known that until now. That's what I found out twenty-five years ago, looking at what are now called Kauffman models — random networks exhibiting what I call "order for free."
Complex systems have evolved which may have learned to balance divergence and convergence, so that they're poised between chaos and order. Chris Langton has made this point, too. It's precisely those systems that can simultaneously perform the most complex tasks and evolve, in the sense that they can accumulate successive useful variations. The very ability to adapt is itself, I believe, the consequence of evolution. You have to be a certain kind of complex system to adapt, and you have to be a certain kind of complex system to coevolve with other complex systems. We have to understand what it means for complex systems to come to know one another — in the sense that when complex systems coevolve, each sets the conditions of success for the others. I suspect that there are emergent laws about how such complex systems work, so that, in a global, Gaia-like way, complex coevolving systems mutually get themselves to the edge of chaos, where they're poised in a balanced state. It's a very pretty idea. It may be right, too.
My approach to the coevolution of complex systems is my order-for-free theory. If you have a hundred thousand genes and you know that genes turn one another on and off, then there's some kind of circuitry among the hundred thousand genes. Each gene has regulatory inputs from other genes that turn it on and off. This was the puzzle: What kind of a system could have a hundred thousand genes turning one another on and off, yet evolve by creating new genes, new logic, and new connections?
Suppose we don't know much about such circuitry. Suppose all we know are such things as the number of genes, the number of genes that regulate each gene, the connectivity of the system, and something about the kind of rules by which genes turn one another on and off. My question was the following: Can you get something good and biology-like to happen even in randomly built networks with some sort of statistical connectivity properties? It can't be the case that it has to be very precise in order to work — I hoped, I bet, I intuited, I believed, on no good grounds whatsoever — but the research program tried to figure out if that might be true. The impulse was to find order for free. As it happens, I found it. And it's profound.
One reason it's profound is that if the dynamical systems that underlie life were inherently chaotic, then for cells and organisms to work at all there'd have to be an extraordinary amount of selection to get things to behave with reliability and regularity. It's not clear that natural selection could ever have gotten started without some preexisting order. You have to have a certain amount of order to select for improved variants.
Think of a wiring diagram that has ten thousand light bulbs, each of which has inputs from two other light bulbs. That's all I'm going to tell you. You pick the inputs to each bulb at random, and put connecting wires between them, and then assign one of the possible switching rules to each of the light bulbs at random. One rule might be that a light bulb turns on at the next moment if both of its inputs are on at the previous moment. Or it might turn on if both of its inputs are off.
If you go with your intuition, or if you ask outstanding physicists, you'll reach the conclusion that such a system will behave chaotically. You're dealing with a random wiring diagram, with random logic — a massively complex, disordered, parallel-processing network. You'd think that in order to get such a system to do something orderly you'd have to build it in a precise way. That intuition is fundamentally wrong. The fact that it's wrong is what I call "order for free."
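A minimal version of the light-bulb experiment fits in a few lines (a sketch with my own parameter choices, smaller than the ten-thousand-bulb network described above): wire N binary nodes to K = 2 random inputs each, assign each a random Boolean rule, and watch the dynamics collapse onto a short cycle instead of wandering chaotically through all 2^N states.

```python
import random

def random_boolean_network(n, k=2, seed=0):
    """Each node gets k random input nodes and a random truth table."""
    rng = random.Random(seed)
    inputs = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    tables = [tuple(rng.randrange(2) for _ in range(2 ** k)) for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Update every node synchronously from its inputs' current values."""
    return tuple(
        tables[i][sum(state[src] << j for j, src in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def attractor_length(n=20, seed=0):
    """Run from a random state until a state repeats; return the cycle length."""
    rng = random.Random(seed + 1)
    inputs, tables = random_boolean_network(n, seed=seed)
    state = tuple(rng.randrange(2) for _ in range(n))
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state, inputs, tables)
    return len(seen) - seen[state]

# For K = 2, the cycle found is typically tiny compared with the
# 2^20 possible states -- "order for free".
print(attractor_length())
```

Repeating the run with larger K (say K = 5) typically yields far longer, chaotic-looking cycles, which is the contrast Kauffman's result turns on.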
There are other epistemological considerations regarding "order for free." In the next few years, I plan to ask, "What do complex systems have to be so that they can know their worlds?" By "know" I don't mean to imply consciousness; but a complex system like the E. coli bacterium clearly knows its world. It exchanges molecular variables with its world, and swims upstream in a glucose gradient. In some sense, it has an internal representation of that world. It's also true that IBM in some sense knows its world. I have a hunch that there's some deep way in which IBM and E. coli know their worlds in the same way. I suspect that there's no one person at IBM who knows IBM's world, but the organization gets a grip on its economic environment. What's the logic of the structure of these systems and the worlds that they come to mutually live in, so that entities that are complex and ordered in this way can successfully cope with one another? There must be some deep principles.
For example, IBM is an organization that knows itself, but I'm not quite talking about Darwinian natural selection operating as an outside force. Although Darwin presented natural selection as an external force, what we're thinking of is organisms living in an environment that consists mostly of other organisms. That means that for the past four billion years, evolution has brought forth organisms that successfully coevolved with one another. Undoubtedly natural selection is part of the motor, but it's also true that there is spontaneous order.
By spontaneous order, or order for free, I mean this penchant that complex systems have for exhibiting convergent rather than divergent flow, so that they show an inherent homeostasis, and then, too, the possibility that natural selection can mold the structure of systems so that they're poised between these two flows, poised between order and chaos. It's precisely systems of this kind that will provide us with a macroscopic law that defines ecosystems, and I suspect it may define economic systems as well.
While it may sound as if "order for free" is a serious challenge to Darwinian evolution, it's not so much that I want to challenge Darwinism and say that Darwin was wrong. I don't think he was wrong at all. I have no doubt that natural selection is an overriding, brilliant idea and a major force in evolution, but there are parts of it that Darwin couldn't have gotten right. One is that if there is order for free — if you have complex systems with powerfully ordered properties — you have to ask a question that evolutionary theories have never asked: Granting that selection is operating all the time, how do we build a theory that combines self-organization of complex systems — that is, this order for free — and natural selection? There's no body of theory in science that does this. There's nothing in physics that does this, because there's no natural selection in physics — there's self-organization. Biology hasn't done it, because although we have a theory of selection, we've never married it to ideas of self-organization. One thing we have to do is broaden evolutionary theory to describe what happens when selection acts on systems that already have robust self-organizing properties. This body of theory simply does not exist.
There are a couple of parallels concerning order for free. We've believed since Darwin that the only source of order in organisms is selection. This is inherent in the French biologist François Jacob's phrase that organisms are "tinkered-together contraptions." The idea is that evolution is an opportunist that tinkers together these widgets that work, and the order you see in an organism has, as its source, essentially only selection, which manages to craft something that will work. But if there's order for free, then some of the order you see in organisms is not due to selection. It's due to something somehow inherent in the building blocks. If that's right, it's a profound shift, in a variety of ways.
The origin of life might be another example of order for free. If you have complex-enough systems of polymers capable of catalytic action, they'll self-organize into an autocatalytic system and, essentially, simply be alive. Life may not be as hard to come by as we think it is.
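The claim that complex-enough polymer systems "simply be alive" rests on a counting argument Kauffman develops in At Home in the Universe: the number of possible ligation/cleavage reactions grows faster than the number of polymers, so the ratio of reactions to molecules rises with polymer length, and catalytic closure eventually becomes overwhelmingly likely. A back-of-the-envelope sketch — the 2-letter alphabet is a simplifying assumption, and the counting considers only cleavage of each polymer at each internal bond:

```python
# Polymers over a 2-letter alphabet up to length L, and the ligation/cleavage
# reactions that interconvert them: a polymer of length n can be cleaved at n-1 bonds.
def molecules(L):
    return sum(2 ** n for n in range(1, L + 1))

def reactions(L):
    return sum(2 ** n * (n - 1) for n in range(2, L + 1))

for L in (4, 8, 12):
    m, r = molecules(L), reactions(L)
    print(f"L={L:2d}: {m:6d} molecules, {r:8d} reactions, ratio {r / m:.1f}")
```

The ratio grows roughly linearly in L. So if each polymer has some fixed small chance of catalyzing any given reaction, then past some length the system almost surely contains a closed, collectively self-reproducing set: an autocatalytic system.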
There are some immediate possibilities for the practical application of these theories, particularly in the area of applied molecular evolution. In 1985, Marc Ballivet and I applied for a patent based on the idea of generating very, very large numbers of partly or completely random DNA sequences, and therefrom RNA sequences, and from that proteins, to learn how to evolve biopolymers for use as drugs, vaccines, enzymes, and so forth. By "very large" I mean numbers on the order of billions, maybe trillions of genes — new genes, ones that have never before existed in biology. Build random genes, or partly random genes. Put them into an organism. Make partly random RNA molecules; from that make partly random proteins, and learn from that how to make drugs or vaccines. Within five years, I hope we'll be able to make vaccines to treat almost any disease you want, and do it rapidly. We're going to be able to make hundreds of new drugs.
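The random-gene-to-random-protein pipeline can be sketched at toy scale. This is an illustration only: the patent work involved wet-lab libraries of billions of sequences, not the five miniature genes generated here, and `random_gene` and its parameters are hypothetical names introduced for the sketch. The codon lookup uses the standard genetic code, packed as a 64-character string in UCAG order:

```python
import random

random.seed(1)

# Standard genetic code, packed in UCAG enumeration order (a common compact encoding).
BASES = "UCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {a + b + c: AMINO[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}

def random_gene(n_codons):
    """A completely random DNA coding sequence: 3 * n_codons random bases."""
    return "".join(random.choice("ACGT") for _ in range(3 * n_codons))

def transcribe(dna):
    """DNA -> mRNA (coding-strand convention: just T -> U)."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read the mRNA in codons, stopping at the first stop codon (*)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# A miniature "library": random genes, each yielding a never-before-seen protein.
library = [random_gene(30) for _ in range(5)]
for gene in library:
    print(translate(transcribe(gene)))
```

Screening such a library against a target — the step that turns novel proteins into candidate drugs, vaccines, or enzymes — is the wet-lab part the sketch leaves out.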
A related area is that probably a hundred million molecules would suffice as a roughed-in universal toolbox, to catalyze any possible reaction. If you want to catalyze a specific reaction, you go to the toolbox, you pull out a roughed-in enzyme, you tune it up by some mutations, and you catalyze any reaction you want. This will transform biotechnology. It will transform chemistry.
There are also connections to be made between evolutionary theory and economics. One of the fundamental problems in economics is that of bounded rationality. The question in bounded rationality is, How can agents who aren't infinitely rational and don't have infinite computational resources get along in their worlds? There's an optimizing principle about precisely how intelligent such agents ought to be. If they're either too intelligent or too stupid, the system doesn't evolve well.
Economist colleagues and I are discussing the evolution of a technological web, in which new goods and services come into existence and in which one can see bounded rationality in a nonequilibrium theory of price formation. It's the next step toward understanding what it means for complex systems to have maps of their world and to undertake actions for their own benefit which are optimally complex or optimally intelligent — boundedly rational. It's also part of the attempt to understand how complex systems come to know their world.
Brian Goodwin: Stuart is primarily interested in the emergence of order in evolutionary systems. That's his fix. It's exactly the same as mine, in terms of the orientation toward biology, but he uses a very different approach. Our approaches are complementary with respect to the same problem: How do you understand emergent novelty in evolution? Emergent order? Stuart's great contributions are there.
The notion of life at the edge of chaos is absolutely germane to Stuart's work. He didn't discover that phrase, but his work has always been concerned with precisely that notion, of how you have an immensely complex system with patterns of interaction that don't obviously lead anywhere, and suddenly out pops order.
That's what he discovered when he was a medical student in the sixties messing about with computers. He worked with François Jacob's and Jacques Monod's ideas about controls. He implemented those on his computer, and he looked at neural networks. It's the same thing that inspired me, but we went in different directions. I went in the direction of the organism as a dynamic organization, and he was much closer to Warren McCulloch and the notion of logical networks and applying it to gene networks. Stuart and I have always had this complementary approach to things, and yet we come to exactly the same conclusions about the emergence of order out of chaotic dynamics. Stuart has the fastest flow of interesting new ideas of anybody I've ever met. I've learned a lot from him.
W. Daniel Hillis: Stuart Kauffman is a strange creature, because he's a theoretical biologist, which is almost an oxymoron. In physics, there are the theoretical types and the experimental types, and there's a good understanding of what the relationship is between them. There's a tremendous respect for the theoreticians. In physics, the theory is almost the real stuff, and the experiments are just an approximation to test the theory. If you get something a little bit wrong, then it's probably an experimental error. The theory is the thing of perfection, unless you find an experiment that shows that you need to shift to another theory. When Eddington went off during a solar eclipse to measure the bending of starlight by the sun and thus to test Einstein's general relativity theory, somebody asked Einstein what he would think if Eddington's measurements failed to support his theory, and Einstein's comment was, "Then I would have felt sorry for the dear Lord. The theory is correct."
In biology, however, this is reversed. The experimental is on top, and the theory is considered poor stuff. Everything in biology is data. The way to acquire respect is to spend hours in the lab, and have your students and postdocs spend hours in the lab, getting data. In some sense, you're not licensed to theorize unless you get the data. And you're allowed to theorize only about your own data — or at the very least you need to have collected data before you get the right to theorize about other data.
Stuart is of the rare breed that generates theories without being an experimentalist. He takes the trouble to understand things, such as dynamical-systems theory, and tries to connect those into biology, so he becomes a conduit of ideas that are coming out of physics, from the theorists in physics, into biology.
Daniel C. Dennett: Stuart Kauffman and his colleague Brian Goodwin are particularly eager to discredit the powerful image first made popular by the great French biologists Jacques Monod and François Jacob — the image of Mother Nature as a tinkerer engaged in the opportunistic handiwork that the French call bricolage. Kauffman wants to stress that the biological world is much more a world of Newtonian discoveries than of Shakespearean creations. He's certainly found some excellent demonstrations to back up this claim. Kauffman is a meta-engineer. I fear that his attack on the metaphor of the tinkerer feeds the yearning of those who don't appreciate Darwin's dangerous idea. It gives them a false hope that they're seeing not the forced hand of the tinkerer but the divine hand of God in the workings of nature. Kauffman gets that from Brian Goodwin. John Maynard Smith has been pulling Kauffman in the other direction — very wisely so, in my opinion.
Stephen Jay Gould: Stuart Kauffman is very similar to Brian Goodwin, in that they are both trying to explore the relevance of the grand structuralist tradition, which Darwinian functionalism never paid a whole lot of attention to. Stuart is different from Brian, in that Brian focuses upon the morphology of organisms. Stuart's main interests are in questions of the origin of life, the origins of molecular organization, which I don't understand very well. I'm not as quantitative as he is, so I don't follow all the arguments in his book. He's trying to understand what aspects of organic order follow from the physical principles of matter, and the mathematical structure of nature, and need not be seen as Darwinian optimalities produced by natural selection.
He's following in the structuralist tradition, which should not be seen as contrary to Darwin but as helpful to Darwin. Structural principles set constraints, and natural selection must work within them. His "order for free" is an outcome of sets of constraints; it shows that a great deal of order can be produced just from the physical attributes of matter and the structural principles of organization. You don't need a special Darwinian argument; that's what he means by "order for free." It's a very good phrase, because a strict Darwinian thinks that all sensible order has to come from natural selection. That's not true.
J. Doyne Farmer: Stuart Kauffman was in a theoretical-biology group at the University of Chicago, run by Jack Cowan, that included people like Arthur Winfree, Leon Glass, and several others who have become some of the most famous theoretical biologists. The fact that any of these guys are still employed as scientists is a tribute to their ability; most of the biology establishment hates theoreticians, and surviving as a theoretical biologist is difficult. Stuart survived, in part, by doing experiments as well, but I think his real passion has always been for theoretical biology.
Francisco Varela: Stuart has taken the notion of seeing emerging levels in biological organizations into explicit forms and mechanisms. In his early work on genetic networks, he did some very fundamental things. He took something that was vague and made it into a concrete example that was workable.
I have a little harder time with his last book, the monster The Origins of Order. Although many of the pieces in there have a flavor of something quite interesting, it doesn't seem to me that the book hangs together as a whole. There's too much of "Let's assume this, and let's assume that, and if this were right, then...." But the basic idea is that we're back to the notion of evolution having intrinsic factors, and in this regard it has to be right. It's like Nick Humphrey's book. Although the smell is the right one, I'm not so sure I can buy the actual theory that he's trying to stitch together.
Stuart is one of the most competent people we have around when it comes to dealing with molecular biological networks. He's one of the great people, in that he has put some important bricks in that edifice, but that edifice has been built by many other people as well: Gould, Eldredge, Margulis, Goodwin. If there's a slight criticism I would make of Stuart, it's that sometimes he's not so clear in acknowledging that. What's happening here is that there's an evolution — or revolution — in biology, which is going beyond Darwin. But this revolution is not reducible to Stuart's own way of expressing it.
Niles Eldredge: Stuart is amazing. He had me on the floor of a cab, doubled up in laughter, the first time I met him. He was imitating all of the variant accents of the Oxford dons in philosophy. He's an amazingly funny, very likable guy, and extremely bright, of course. He takes what I used to call a transformationalist approach to evolution.
The standard way of looking at evolution is that evolution is a matter of transforming the physical properties of organisms. Stuart's got models jumping around from adaptive peak to adaptive peak, to explain the early Cambrian explosion. There's so much missing between the way he's thinking about things and the way I'm thinking about things that we've never really connected. We've talked, and I've put him together with other people who use computers to simulate evolutionary patterns, but there's just too much of a gap in our approach to things for there to be much useful dialog between us.
Nicholas Humphrey: Kauffman is less radical than Goodwin, at least nowadays. Kauffman originally would have said that natural selection doesn't play a very important role, but he's been persuaded that even if the possibilities that biology has to play with are determined by the properties of complex systems, nonetheless the ones we see in nature are those that have been selected. The world throws up possibilities, and then natural selection gets to work and ensures that just certain ones survive.
Kauffman is doing wonderful work, and he's certainly put the cat among the pigeons for old-fashioned neo-Darwinism. He's forced people to recognize that selection may not be the only designing force in nature. But he's not claiming to be the new Darwin. We don't need a new Darwin.
1 comment:
There is much of merit in Kauffman's work. However, his model (and its evidential basis) has been superseded by that presented by Williams and Frausto da Silva in their superb "The Evolution of Chemistry". This, in common with my own writings, underlines the continuity of evolutionary processes from stellar nucleosynthesis through to the biological realm and to our own species.
My own latest book "The Goldilocks Effect" (in which my use of the term "prevailing conditions" corresponds to Kauffman's "adjacent possible") extends this evolutionary continuum to include the evolution of technology and makes projections beyond. "The Goldilocks Effect" in e-book formats is available for free download from the "Unusual Perspectives" website.