Showing posts with label anthropocentrism.

Tuesday, June 04, 2013

George Dvorsky - How Does the Anthropic Principle Change the Meaning of the Universe?

I've been meaning to post this article for a while now, since this is a topic that comes up often in my rejection of certain Buddhist beliefs and, by extension, my rejection of certain beliefs in Wilberian Integral Theory (specifically, that the universe gave birth to itself to become conscious of itself).

I have serious issues with the anthropic principle, or at least the strong anthropic principle, because it asserts that the Universe was essentially compelled to produce conscious beings (namely, human consciousness).

Here is a brief overview from the Wikipedia entry on AP:
In astrophysics and cosmology, the anthropic principle (from the Greek, anthropos, human) is the philosophical consideration that observations of the physical Universe must be compatible with the conscious life that observes it. Some proponents of the anthropic principle reason that it explains why the Universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that the universe's fundamental constants happen to fall within the narrow range thought to be compatible with life.[1] 
The strong anthropic principle (SAP) as explained by Barrow and Tipler (see variants) states that this is all the case because the Universe is compelled, in some sense, for conscious life to eventually emerge. Critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe's ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing any such fine tuning, while a universe less compatible with life will go unbeheld. English writer Douglas Adams, who wrote The Hitchhiker's Guide to the Galaxy, used the metaphor of a living puddle examining its own shape, since, to those living creatures, the universe may appear to fit them perfectly (while in fact, they simply fit the universe perfectly).
John Barrow and Frank Tipler propose three possible elaborations of the strong anthropic principle. The second one (observers are necessary to bring the universe into being) is where I really balk, and it is precisely where Buddhism and Wilberian integral theory go awry.
  • "There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'." This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life as we know it will emerge and evolve.
  • "Observers are necessary to bring the Universe into being." Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality, see It from bit, and his Participatory Anthropic Principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
  • "An ensemble of other different universes is necessary for the existence of our Universe." By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
Anyway, here is a good article on AP from George Dvorsky at io9. For an interesting debate on this topic, see the Edge Conversation with Lee Smolin and Leonard Susskind.

How does the Anthropic Principle change the meaning of the universe?

By GEORGE DVORSKY 
3/08/13
Image via Luc Perrot.

One of the more extraordinary things about the universe is that it has produced beings who can observe it — namely, us. Its laws and constants are so precise that, if they were even slightly modified, no human would be here to see it. Many cosmologists and philosophers have wondered if we should read anything into all this preciseness: Are the finely-tuned physical laws that surround us mere coincidence, or does it imply that we are somehow meant to be here? That's where the Anthropic Principle comes into play.

The Anthropic Principle (AP) is that hazy grey area where philosophy meets science. And in fact, many scientists loathe it for this very reason. It's untestable, they argue, and tautological — a skewed form of reasoning in which the principle is basically being used to prove itself.



And indeed, the AP does seem like a strange concept at first. It essentially states that we will only find ourselves in a universe that's capable of giving rise to us. Put another way, observations of the universe must be compatible with the conscious life that observes it.

It's a principle that makes perfect sense — and for some, no sense at all. But like so many things in science and philosophy, the devil is in the details.

The AP forces us to take a giant step back and evaluate the conditions of the universe in consideration of our presence within it. For scientists, it's a kind of '40-foot perspective' that can help illuminate — and even possibly explain — some of the more surprising aspects of cosmology. And at the very least, it serves as a constant reality check to remind us that we will always be subject to observation selection effects; no matter where we go, we will always be there.

A good thought experiment in this regard comes from the Canadian philosopher John Leslie. In his book, Universes, he asks us to imagine a man facing a firing squad of fifty expert marksmen. After aiming and firing, the executioners all miss their mark.

Now, there are two ways in which we can evaluate this surprising outcome. We can either shrug our shoulders and point to the obvious, that they simply missed. Or we can come up with some explanations as to why they all missed. This latter point is very much at the heart of anthropic reasoning.
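Leslie's point can be restated in probability terms: conditioned on the prisoner surviving to wonder about the outcome, the all-miss result is certain, even though it is astronomically unlikely in absolute terms. A minimal sketch (the per-marksman miss chance of 0.05 is an arbitrary illustrative number, not from Leslie):

```python
# Toy version of Leslie's firing-squad thought experiment.
# Assumption: each of the 50 expert marksmen independently misses
# with probability 0.05 (an illustrative number, not from the source).
P_MISS = 0.05
N_MARKSMEN = 50

# Unconditional probability that all fifty miss:
p_all_miss = P_MISS ** N_MARKSMEN

# Probability of the same outcome, conditioned on the prisoner being
# alive to ask the question (survival *is* the all-miss event):
p_all_miss_given_survival = 1.0

print(f"P(all miss)            = {p_all_miss:.2e}")
print(f"P(all miss | survived) = {p_all_miss_given_survival}")
```

The surprise, on this reading, belongs to the first number, not the second: anyone in a position to ask "why did they all miss?" will always find that they all missed.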


Origins


The AP has been around for quite some time, though it only really took on its modern form in the last forty years.

Early efforts to come to grips with observational effects were expressed in Hume's Dialogues Concerning Natural Religion, and Kant's ideas about how our experience of the world is formulated by our sensory and intellectual faculties. Back in the 1920s, James Jeans observed that, "the physical conditions under which life is possible form only a tiny fraction of the range of physical conditions which prevail in the universe as a whole." Likewise, his contemporary, Arthur Eddington, speculated about "selective subjectivism," the idea that the laws of nature are indirectly imposed by the human mind, which in turn determines (and constrains) what we know about the universe.



More recently, some scientists have used it to explain the series of bizarre "large-number coincidences" in physics and cosmology. These are the surprisingly large order-of-magnitude connections that exist between (apparently) unrelated physical constants and cosmological parameters.

For example, the electromagnetic force is 39 orders of magnitude stronger than gravity. If it was any closer in strength, stars would have collapsed long before life could emerge. Or, the universe's vacuum energy density is about 120 orders of magnitude lower than some theoretical estimates, which, if any higher, would have blown the universe apart. And the neutron is heavier than the proton — but not so heavy that neutrons cannot be bound in nuclei where conservation of energy prevents the neutrons from decaying. Without neutrons, we wouldn't have the heavier elements needed for building complex life. There are many other examples, each one pointing to extreme specificity.
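The 39-orders-of-magnitude figure is easy to check: both the Coulomb force and the gravitational force between two particles fall off as 1/r², so their ratio is independent of separation. A quick sketch for an electron and a proton using standard CODATA constants (the choice of particle pair is my assumption; the article doesn't specify one):

```python
import math

# Standard physical constants (SI units, CODATA values)
k_e = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
G   = 6.67430e-11         # gravitational constant, N*m^2/kg^2
e   = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
m_p = 1.67262192369e-27   # proton mass, kg

# Both forces scale as 1/r^2, so the separation r cancels in the ratio.
ratio = (k_e * e**2) / (G * m_e * m_p)

print(f"electric/gravitational force ratio ~ {ratio:.2e}")
print(f"orders of magnitude: {math.log10(ratio):.1f}")
```

For this pair the ratio comes out around 10^39, matching the figure in the text.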

In 1961, Robert H. Dicke used a prototypical version of the AP to explain away these coincidences, saying that physicists were reading too much into them. These large numbers, he argued, are a necessary coincidence (or prerequisite) for the presence of intelligent beings. If these parameters were not so, life would not have arisen. And in turn, we wouldn't be here to marvel at the 'surprisingness' of these physical constants and laws.


Enter Brandon Carter


Then, in 1974, the theoretical physicist Brandon Carter kindled the modern interpretation of these ideas, which he dubbed the Anthropic Principle. But rather than settle on just one perspective or definition, he said there were two different ways we can approach the issue.

Specifically, he proposed the Weak Anthropic Principle (WAP) and the Strong Anthropic Principle (SAP). Both approaches imply that these anthropic coincidences were not the result of chance, but were instead built directly into the structure of the universe.

Of the WAP he said:
We must be prepared to take into account the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers.
And of the SAP he said:
The universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage.
Indeed, the SAP is a bit of a mind frak. Carter essentially argued that, if the SAP is true, the universe must give rise to intelligent observers. The WAP, on the other hand, simply implies that the universe we observe must have the conditions to support intelligent life, but that life doesn't necessarily have to arise.

So, if the SAP is true, then the universe is indeed here for us.

Keep in mind that these are philosophical thought experiments, and not scientific statements per se. To a certain extent, philosophers are the conjurors of proto-scientific concepts — musings that should in turn be proven or disproven through the application of the scientific method.

Moreover, this doesn't imply or prove that God or some other Prime Mover exists, though many have taken it to that extreme. All the AP does in this regard is tell us that the laws of the universe should be understood through the context of the presence of observers.

Interestingly, Carter later regretted using the word 'anthropic.' It has misled some into thinking that he was referring to Homo sapiens specifically (or that observers were limited to carbon-based life). But his principle applies to any observer anywhere in the universe.


Image: "Wonder - Zena Gazing at the Moon" by Alex Grey (1996).

For example, a dolphin, which is a conscious being, can be considered an observer. Same goes for a self-aware robot on the other side of the universe. Or more conceptually, imagine a universe in which only evolving streams of information can exist. Eventually, a self-aware algorithm could emerge that's capable of assessing its surroundings. This would be an observer, too, but one far removed from our own experience.

Since Carter's original elucidation, the AP has been reinterpreted and redefined hundreds of times. Other proposed names include "self-locating belief" and "indexical information" (it's not difficult to see why these didn't catch on). The "fine-tuning argument," however, has gained traction as a kind of substitute term, or correlated area of inquiry.

One of the more interesting re-evaluations of Carter's original idea comes from the mathematician John Barrow and physicist Frank Tipler. They devised a third principle, the Final Anthropic Principle, which states that intelligent information processing must come into existence in the universe, and, once it comes into existence, it will never die out.

If this is true, not only is the universe here for us, but its configuration is such that we will become its permanent residents (in some form or another).


Welcome to the multiverse


As noted, many scientists hate the AP — and often with a passion. Critics contend that it's a product of circular thinking and that it's self-evident — or that life should simply be thought of as a mere epiphenomenon (our presence in the universe is merely a side effect, or coincidence).

Others, like physicist Lee Smolin, argue that the characteristics of the universe can be explained in other ways, such as his theory of cosmological natural selection. As Smolin told io9, "The Anthropic Principle is simply incapable of making a falsifiable prediction for any kind of testable experiment."

At the same time, however, scientists like Sir Martin Rees have found it to be quite helpful, particularly when applying Carter's WAP to some modern interpretations of cosmology. In fact, some physicists, like Rees, use it when explaining (and reconciling) the multiverse theory.

According to this theory, our universe is not the only one, and also not the only kind. Given the possibility of a near infinite set of variable universes, there could be alternative universes out there with different constants and parameters. In some universes, gravity will be stronger, or the speed of light slower, and so on.

In the space of all possible universes, therefore, there will be a small subset of universes in which life can exist, and a larger subset in which life is impossible. Clearly, we find ourselves in one of the life-friendly universes. Other life-friendly universes with slightly different laws, or alternative modalities, may allow for other types of observers, but observers nonetheless; they too will be subject to the anthropic effect.

On the other hand, universes that are unfriendly to life can never be observed — but that doesn't mean they're not out there. It's just that nobody will be able to document such universes and record their unique characteristics. Unless, of course, as some interpretations of quantum physics suggest, universes can only exist in the presence of observers; no observer, no universe.
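This selection effect is easy to see in a toy ensemble. In the sketch below, each "universe" gets one random dimensionless constant, and life requires that constant to fall in a narrow window; the uniform distribution and the window [0.49, 0.51] are arbitrary illustrative assumptions, not physics:

```python
import random

random.seed(0)

N_UNIVERSES = 1_000_000
LO, HI = 0.49, 0.51   # assumed habitable window (illustrative only)

# Draw an ensemble of universes, each with one random constant.
ensemble = [random.random() for _ in range(N_UNIVERSES)]

# Observers arise only in life-friendly universes, so only those
# universes are ever observed.
observed = [u for u in ensemble if LO < u < HI]

# Across the whole ensemble, fine-tuned universes are rare...
frac_friendly = len(observed) / N_UNIVERSES

# ...but from the observers' vantage point, every observed universe
# looks fine-tuned: nobody is around to report on the others.
frac_friendly_among_observed = sum(1 for u in observed if LO < u < HI) / len(observed)

print(f"life-friendly in ensemble: {frac_friendly:.3f}")
print(f"life-friendly as observed: {frac_friendly_among_observed:.3f}")
```

About 2% of the ensemble is habitable, yet 100% of observations come from habitable universes — which is the whole of the WAP's "fine-tuning is selection bias" point.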


The inescapable observation selection effect


Critics and proponents aside, there's one last aspect to the AP that needs to be brought out — and that's its role as an observational principle.

Tautology or not, and regardless of whether multiverses exist, it highlights a fundamental problem or limitation that all scientists face when they're making any kind of proclamation about the nature of the cosmos — and that is, as observers, we will always be subject to observational selection effects.



Consequently, it serves as a kind of reality check, one that's somewhat akin to a soft interpretation of the Heisenberg Uncertainty Principle, or even Plato's Cave. It's the oppressive realization that everything we observe is being observed — and that for something to do the observing, the environment has to be conducive to that something's existence. We can only take measurements and formulate judgements in a modality in which that can happen.

As Oxford philosopher Nick Bostrom has said, "all observations require the existence of an appropriately positioned observer." Indeed, our data is not only filtered by the limitations of our instruments, "but also by the precondition that somebody be there to ‘have' the data yielded by the instruments (and to build the instruments in the first place)." The biases that occur due to these preconditions are what's referred to as observation selection effects.

So, in answer to the headline of this article — is this universe here just for us? — the Anthropic Principle alone cannot provide the answer. But it does force us to pause and take the suggestion seriously. Whether science can now run with it and provide us with an answer is an open question.

In the meantime, take solace in the fact that you're a piece of the universe that's observing itself.

Sources not cited: Anthropic Bias: Observation Selection Effects in Science and Philosophy (Studies in Philosophy) and "The Origin of the Modern Anthropic Principle."

Saturday, March 09, 2013

George Dvorsky on the Anthropic Principle at io9


Over at io9, George Dvorsky has posted an interesting and well-done article on the anthropic principle. This topic is of special interest to me because it's one of the issues that bothers me about Buddhism, integral theory, and various other theories that posit a universe with consciousness as an essential element of its existence.

Theories of emergence make a lot more sense to me without the need for panpsychism as an explanatory feature for the existence of consciousness.

How does the Anthropic Principle change the meaning of the universe?

George Dvorsky
March 8, 2013

One of the more extraordinary things about the universe is that it has produced beings who can observe it — namely, us. Its laws and constants are so precise that, if they were even slightly modified, no human would be here to see it. Many cosmologists and philosophers have wondered if we should read anything into all this preciseness: Are the finely-tuned physical laws that surround us mere coincidence, or does it imply that we are somehow meant to be here? That's where the Anthropic Principle comes into play.

The Anthropic Principle (AP) is that hazy grey area where philosophy meets science. And in fact, many scientists loathe it for this very reason. It's untestable, they argue, and tautological — a skewed form of reasoning in which the principle is basically being used to prove itself.


And indeed, the AP does seem like a strange concept at first. It essentially states that we will only find ourselves in a universe that's capable of giving rise to us. Put another way, observations of the universe must be compatible with the conscious life that observes it.

It's a principle that makes perfect sense — and for some, no sense at all. But like so many things in science and philosophy, the devil is in the details.

The AP forces us to take a giant step back and evaluate the conditions of the universe in consideration of our presence within it. For scientists, it's a kind of '40-foot perspective' that can help illuminate — and even possibly explain — some of the more surprising aspects of cosmology. And at the very least, it serves as a constant reality check to remind us that we will always be subject to observation selection effects; no matter where we go, we will always be there.

A good thought experiment in this regard comes from the Canadian philosopher John Leslie. In his book, Universes, he asks us to imagine a man facing a firing squad of fifty expert marksmen. After aiming and firing, the executioners all miss their mark.

Now, there are two ways in which we can evaluate this surprising outcome. We can either shrug our shoulders and point to the obvious, that they simply missed. Or we can come up with some explanations as to why they all missed. This latter point is very much at the heart of anthropic reasoning.

Origins


The AP has been around for quite some time, though it only really took on its modern form in the last forty years.

Early efforts to come to grips with observational effects were expressed in Hume's Dialogues Concerning Natural Religion, and Kant's ideas about how our experience of the world is formulated by our sensory and intellectual faculties. Back in the 1920s, James Jeans observed that, "the physical conditions under which life is possible form only a tiny fraction of the range of physical conditions which prevail in the universe as a whole." Likewise, his contemporary, Arthur Eddington, speculated about "selective subjectivism," the idea that the laws of nature are indirectly imposed by the human mind, which in turn determines (and constrains) what we know about the universe.


More recently, some scientists have used it to explain the series of bizarre "large-number coincidences" in physics and cosmology. These are the surprisingly large order-of-magnitude connections that exist between (apparently) unrelated physical constants and cosmological parameters.

For example, the electromagnetic force is 39 orders of magnitude stronger than gravity. If it was any closer in strength, stars would have collapsed long before life could emerge. Or, the universe's vacuum energy density is about 120 orders of magnitude lower than some theoretical estimates, which, if any higher, would have blown the universe apart. And the neutron is heavier than the proton — but not so heavy that neutrons cannot be bound in nuclei where conservation of energy prevents the neutrons from decaying. Without neutrons, we wouldn't have the heavier elements needed for building complex life. There are many other examples, each one pointing to extreme specificity.

In 1961, Robert H. Dicke used a prototypical version of the AP to explain away these coincidences, saying that physicists were reading too much into them. These large numbers, he argued, are a necessary coincidence (or prerequisite) for the presence of intelligent beings. If these parameters were not so, life would not have arisen. And in turn, we wouldn't be here to marvel at the 'surprisingness' of these physical constants and laws.
Read the whole interesting article.

Saturday, January 19, 2013

Ken Wilber - Response to Critical Theory in Defense of Integral Theory

Apparently the long drought of new writing from integral philosopher Ken Wilber has come to an end, which can only mean his health has improved considerably - that alone is great news. He says he has completed Sex, Karma, Creativity, which is volume 2 of the Kosmos Trilogy, the first volume being Sex, Ecology, Spirituality (1995).

These pieces are two long endnotes, and one excerpt, written "in response to recent articles on Critical Theory and Integral Theory, and, while appreciating certain aspects of Critical Theory, come out strongly in favor of Integral Theory." As Bruce Alderman mentions in his post about these new excerpts, Wilber likely means "critical realism" in his title, which is a very different thing than critical theory.

For clarity, critical realism "highlights a mind-dependent aspect of the world, which reaches to understand (and comes to understanding of) the mind-independent world." Wilber's main point here, with which I disagree, is that CR is hardly integral because it denies the role of consciousness in the evolution of the universe - he describes the CR position as "ripping consciousness out of the Kosmos and leaving 'the real' to be merely a denuded 'ontology'."

What fails to be mentioned here is that we can combine CR philosophy - the idea that there is an ontologically "real" universe out there, the mind independent world - with the fields of emergence and complex adaptive systems, thereby removing the anthropocentric necessity of consciousness being an organizing principle of the universe.

RESPONSE TO CRITICAL THEORY IN DEFENSE OF INTEGRAL THEORY

January 17th, 2013


The following are two long endnotes, and one excerpt, from my recently finished book, Sex, Karma, Creativity, which is volume 2 of the Kosmos Trilogy, whose first volume is Sex, Ecology, Spirituality. They were written, in part, in response to recent articles on Critical Theory and Integral Theory, and, while appreciating certain aspects of Critical Theory, come out strongly in favor of Integral Theory. –Ken Wilber

Chapter “Individual and Social,” endnote 4:

4. Integral Theory (IT) and Critical Realism (CR) share many items in common, but there are some deep differences as well. To begin with, Critical Realism separates epistemology and ontology, and makes ontology the level of the “real”; whereas, for Integral Theory, epistemology and ontology cannot so be fragmented and fractured, but rather are two correlative dimensions of every Whole occasion (part of the tetra-dimension of every holon). Realism maintains that there are ontological realities that are not dependent upon humans or human theories—including much of the level of the “real”—including items such as atoms, molecules, cells, etc.—and IT agrees, with one important difference: IT is panpsychic (a term I’m not fond of, preferring “pan‑interiorist,” meaning all beings have interiors or proto-consciousness, a la Whitehead, Peirce, Leibnitz, etc.)—to wit, atoms do not depend upon being known by humans, but they do depend upon being known by each other. The “prehension” aspect of atoms (proto-knowing, proto-feeling, proto-consciousness) helps to co-enact the being or ontology aspect of the atoms for each other—their own epistemology and ontology are thus inseparable and co-creative. The atom’s prehension is part of its very ontology (and vice versa), and as each atom prehends its predecessor, it is instrumental in bringing it forth or enacting it, just as its own being will depend in part on being prehended/known/included by its own successor. 
If, for the moment, we leave Quantum Mechanics out of the picture (see below), none of this depends on humans for its existence or being, and yet the atom’s prehension-feeling-knowing is an intrinsic part of this level of the “real.” Consciousness is not something that can be sucked out of being to leave an awareness-free “ontology” lying around waiting to be known by some other sentient being; consciousness, rather, goes all the way down, and forms part of the intrinsic awareness and intrinsic creativity of each ontological being or holon. Whitehead’s “ultimate category”—namely, “the creative advance into novelty”—is part of the prehension of each and every being in existence, and the creative-part cannot be ripped from the being‑part without severe violence. To postulate the most fundamental level of reality as merely ontology—being without knowing or consciousness or creativity—is basically a 1st-tier move that shatters the Wholeness of this and every real occasion.

Likewise, spiritual transcendence (Eros) reaches all the way down as well. In IT’s neoWhiteheadian view, each new moment comes to be as a subject (with all 4 quadrants), and it prehends (tetra-prehends) its predecessor, which is now an object (in all 4 quadrants) for this new subject. The new subject “transcends and includes” the old subject (now as object), and thus they mutually co-create each other: the old subject that is now object and is included in the new subject helps shape the new subject itself, by the simple fact of being included in it, actually embraced by it, and thus to some degree determining it. Likewise, the new subject, in including the old subject, is instrumental in bringing it forth or enacting it, co-creating its very being as a new object as it does so—and the new subject then adds its own degree of creativity, consciousness, or novelty, and thus actually co-creates a new being in the very act of prehensive unification. This “transcend and include” goes all the way down to the smallest micro‑subatomic particles, and all the way through the actual meso developmental levels (where, as Kegan puts it for human development, “the subject of one level becomes the object of the subject of the next”—which is the meso view of Whitehead’s prehension—namely, that “the subject of this moment becomes the object of the subject of the next”—but acting now on a larger, higher, more complex, more conscious level), and all the way to the macro practices of meditation, where transcendence is the overall goal and occurs through the objectification of state-stages from gross to subtle to causal to True Self to ultimate Spirit (with each state-stage transcending and including its predecessor—the subject of one becoming the object of the next). This Eros (which certainly can be viewed as spiritual) is a primary driver of evolution itself, starting all the way back with the Big Bang and all the way through to ultimate Enlightenment. 
As Erich Jantsch put it, evolution is “self-organization through self-transcendence,” and that “transcend and include” is the very form of the moment-to-moment unfolding of reality.

Further, what CR describes as “real”—or “the intransitive level”—is actually and mostly turquoise reality. This is not the same “real” that is found at the red level, the amber level, the orange level, the green level, or the indigo level. If CR described what it meant by “ontology” to someone at red, they would flatly disagree, with CR’s version of ontology being “over their heads.” In fact, what most sophisticated thinkers today call “ontology” is actually the turquoise level of being-consciousness—and not as a mere description, but a real ontic-epistemic structure of the universe. These levels of being-consciousness are not just levels of a human being, but levels of the Kosmos itself (and those different levels are different worlds!). So I am certainly not saying that this “turquoise reality” or ontology isn’t real, only that it is inseparable from the prehensive-knowing-consciousness of the turquoise level of being-consciousness itself. There is no way around this—precisely because of panpsychism (such as subscribed to by Leibnitz, Whitehead, or Peirce). The turquoise level looks at the atomic level, the molecular level, the cellular biological level, etc., and concludes they have a reality in and of themselves—an ontology—but not only is it describing those levels as what they look like from turquoise—even if we ignore that part—they are overlooking the prehensive-consciousness-knowing dimension of the atoms, molecules, and cells themselves, an epistemic dimension that co-creates the ontic dimension with the being aspect of those holons (and vice versa)—again, epistemology and ontology are two different dimensions of the same Wholeness of the real occasion, and cannot be fragmented without genuine violence to the Kosmos.

Thus, for example, take molecules during the magic era. “Molecules” did not “ex-ist” (meaning, “stand out”) anywhere in the magic world—there was nothing in the consciousness of individuals at magic that corresponded with “molecules.” But we moderns—we at turquoise—assume that the molecules existed nonetheless—if they didn’t ex-ist, they did what we might call subsist (I agree). This is similar to CR’s transitive (ex-ist) and intransitive (subsist)—with one major exception: as noted, IT is panpsychic—epistemology and ontology—consciousness and being—cannot be torn asunder. What we call “pre-human ontology” is actually a pre-human sentient holon’s epistemic-ontic Wholeness, and not merely a disembodied, floating, “view-from-nowhere” ontology. A molecule’s prehension-knowing-proto-feeling is an inseparable part of its being-ontological makeup at the molecular level, and both are necessary to co-create each other. Ignoring prehension (and consciousness) just leaves ontology-being for the molecule, and epistemology-consciousness is just given to humans (or higher mammals), not to all sentient beings—they only get being, not knowing. But if a human consciousness-knowing is not involved in co-creating the ontology of atoms, molecules, or cells, their own consciousness-prehension is involved, all the way down (a la Peirce and Whitehead).

Further, when we actually get down to explaining what this subsistence reality is—the “real”—it changes with each new structure (red, amber, orange, green, etc.). What we glibly call “atoms” ex-ist at orange; those become sub-subatomic particles at green (mesons, bosons, gluons, etc.); those become 8-fold-way quarks at teal; those become 11-dimensional strings at turquoise. We can’t say what the atomic level is except from some structure of being-consciousness, and each structure discloses a new ontology, a new world. (That ontology is there, is real, but is co-created by the prehensive holons at that level.) Again, this is not to reduce ontology to epistemology, but rather claim they are complementary aspects of the same Whole occasion. (In short, I disagree with both Kant and Bhaskar—or I agree with them both, depending on how you look at it.)

This reminds me of Varela and Maturana’s brilliant analysis of the world (the “reality”) of a frog. Prior to Varela and Maturana, most biologists followed some form of eco-systems theory and described the reality of the frog as existing in various systems of nature. But Varela and Maturana pointed out that that was actually what the frog’s reality looked like from the scientist’s point of view, but not from the frog’s. The frog’s “view from within” (zone #1) consisted only of various patches of color and motion, smells and sounds; it did not have the cognitive capacity to stand outside itself and picture the entire system of which it was a part—only the scientist did that (using zone #8). Reality, for the frog, was the immediate view from zone #1, and the best the scientist could do was attempt to capture that using zone #5—a 3p x 1-p x 3p—namely, the objective scientist, while studying an objective organism (3p), attempts to take the organism’s “view from within” or “biological phenomenology” (1-p)—two phrases Varela often used. Varela pointed out that this “view from within” was not the actual 1st-person view of the frog itself that the scientist is directly observing (that would be the frog’s zone #1), but the exterior version of the frog’s inner view (or zone #5; i.e., the view from the inside of the UR, not the inside of the UL). The point is that the frog enacts its own reality—its own epistemology or consciousness brings forth and co-creates its own ontology or world (the closest to which the scientist can get is zone #5)—and the scientist himself likewise enacts, or can enact, his own view of the frog’s reality, which many scientists believe is generally a systems view (zone #8), but more truthfully is a zone #5 version. But in both cases, the being and knowing are two dimensions of the same actual occasion, whatever it is.
But merely using a systems view is a deeply anthropocentric view of the frog’s real world, and claiming to know the frog’s actual world (zone #1) by using the scientist’s tools (zone #8) does grave violence to the frog’s actual interior.

Thus, according to IT, the level of the “real” described by CR doesn’t exist as CR describes it. Rather, in IT’s view, in actuality it is either the product of both the prehensive-feeling-knowing plus holonic-being-isness of each of the holons at the particular level of the real being described (e.g., quarks, atoms, molecules, genetics) and their relations—all of which are tetra-enacted and tetra-evolved; and/or it is the result of the way the world emerges and is tetra-enacted at and from a particular level of consciousness-being (e.g., turquoise) of the scientist. In the latter case, the real is not created by its mere description by the particular level of consciousness-being, but rather actually emerges as a level of the real with the emergence of the deep structures of the particular level of being-consciousness. (Again, these levels of being-consciousness are not just levels of human beings but levels of the real Kosmos.) These levels of being-consciousness (red, amber, orange, green, turquoise, etc.) are not different interpretations of a one, single, pregiven reality or world, but are themselves actually different worlds in deep structure (an infrared world, a red world, an amber world, an orange world, a green world, a turquoise world, etc., each of which is composed of Nature’s or Kosmic habits tetra-created by the sentient holons at those levels, as are atomic, molecular, cellular, etc. worlds).

The deep structures of these worlds are the nondual epistemic-ontic Whole occasions, but this doesn’t prevent them from being fallible when it comes to humans’ attempts at disclosing and discovering and describing the real characteristics of the Whole; i.e., the surface epistemic-ontic approaches are fallible (which is one of the reasons that multiple methodologies—epistemologies that co-enact and co-create correlative ontologies, and vice versa—are so important: the more methodologies used, the likelier the deeper Wholeness (the deeper unity of being-consciousness) will be accurately disclosed and enacted in more of its dimensions).

These deep features of the real are—a la Peirce—not eternal pregiven realities of a one world, but Nature’s habits that have been engraved in the universe through the interaction of semiotic-sentient beings (that go all the way down—including quarks and atoms—which is why there are proto-conscious-feeling-knowing beings present from the start to actually create habits—they are living and conscious beings capable of forming habits!—instead of prehension-free ontologies that have no living choices, and thus must blindly obey laws, something both Peirce and I, among others, find unintelligible. Further, according to Peirce, it is the fact that each semiotic being—all the way down—has in its tripartite makeup an interpretant that means the holon’s being is determined in part by interpretation, all the way down—and this, he says, is “inescapable”).

Which brings us to another point. Originally, CR was created as a way to explain and justify the results of scientific experiments (as Karl Popper asked, paraphrasing, “How is it that science actually works? It works because there is a real ontology that can rebuff it”). But it is not clear at all that the types of realities disclosed by science and scientific experiments are the same ones that work with morals, hermeneutics, aesthetics, and introspection, to name a few of the multiple methodologies that exist out there and address different object domains and zones. To claim that only scientific experiments give “real” results is perilously close to scientism, and simply adding other disciplines on top of science is actually to reduce those dimensions to merely scientific methodology itself. Reducing all dimensions to science certainly strikes me as being far from an integral move. I am much more satisfied with the (at least) 8 fundamental methodologies that disclose different object domains (and whose injunctions or paradigms enact or bring forth or co-create those various domains, which, again, are not just lying around out there waiting to be stumbled on by a scientific methodology—that belief is what Sellars calls “the myth of the given.”)

(More recently, Bhaskar has introduced spiritual realities and consciousness into his scheme. But dumping consciousness on top of an ontological scheme that was developed without it is, well, cheating. The whole scheme has to be done over, using consciousness as an intrinsic part of the scheme from the very beginning, and not simply importing it after the scheme has been developed without it. The chances that the scheme will have anything real to do with actual consciousness are slim indeed, as consciousness becomes a deus ex machina to the main frame.)

Finally, I would be remiss if I didn’t at least briefly mention the claims made on behalf of Quantum Mechanics (QM), which has, if nothing else, been taken as the most precise and successful scientific model ever invented (one estimate put it at a million times more precise than Newtonian physics). The central concern of QM is what is called the “collapse of the wave packet” (which means, simplistically, this: around 1925–6, both Heisenberg and Schroedinger came up with a set of mathematical equations describing the existence of a subatomic particle. Heisenberg’s was a complicated matrix-mechanics equation, and Schroedinger’s a simpler wave equation. They were quickly shown to be interchangeable in results, and thus Schroedinger’s wave equation, being the simpler of the two, soon became the standard form of QM—“the collapse of the wave packet” refers to the collapse of Schroedinger’s wave-equation version). Max Born (building on the quantum revolution that Max Planck had launched in 1900 by suggesting that energy does not come in a continuum but rather exists in discrete packets or quanta) noticed that if you take the square of the amplitude of the Schroedinger wave function, you get the probability of the specific location (and/or a set of other characteristics) of the particle in question (but only for paired characteristics—and, the catch, the more precisely you determine one, the less precisely you can determine the other). This inability to determine both variables was put in a precise form as what became famously known as the Heisenberg Uncertainty Principle, which basically brought an end to strict causality in the physical sciences (and presumably removed “causality” from the Realists’ level of the “real”). But the real kicker came from the fact that, prior to actually measuring the particle to gain some information about it, the particle existed only as a probability—you literally couldn’t say it existed or it didn’t exist.
Moreover, the type of measurement that you performed on the particle determined the type of being that you actually evoked—different measuring methods gave you different beings with different qualities. This led John Wheeler to say that we live in a “participatory” universe. QM has now been found applicable at scales from the very smallest to the very largest, as well as in brain interactions, biology, etc., and remains, for what it does, “the most successful physical theory of all time.”
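The two technical claims the passage leans on—Born’s square-the-amplitude rule and the uncertainty trade-off between paired variables—can be checked numerically. Here is a minimal Python sketch (the Gaussian wave packet, the grid, and the natural units with hbar = 1 are illustrative assumptions, not anything from the text): squaring the wave function’s amplitude yields a distribution that integrates to 1, as a probability must, and the position and momentum spreads multiply out to hbar/2, the floor set by the Uncertainty Principle.

```python
import numpy as np

hbar = 1.0  # natural units (an assumption for illustration)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 1.5  # hypothetical width of the wave packet

# A Gaussian wave packet; the Born rule says |psi|^2 is the probability density
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))
prob_x = np.abs(psi) ** 2

norm = np.sum(prob_x) * dx                      # total probability, ~1.0
spread_x = np.sqrt(np.sum(x**2 * prob_x) * dx)  # position uncertainty (= sigma)

# Fourier transform to momentum space; |psi_k|^2 is the momentum distribution
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
psi_k = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2.0 * np.pi)
prob_k = np.abs(psi_k) ** 2
dk = k[1] - k[0]
prob_k /= np.sum(prob_k) * dk                   # renormalize for FFT conventions
spread_p = hbar * np.sqrt(np.sum(k**2 * prob_k) * dk)

print(norm, spread_x * spread_p)  # ~1.0 and ~0.5 (= hbar/2)
```

For a Gaussian packet the product sits exactly at the allowed minimum; shrinking sigma sharpens the position spread and broadens the momentum spread by the same factor, which is “the more you find of one, the less you can find of the other” in quantitative form.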

What is remarkable about this theory is how firmly it unites epistemology and ontology—the two, in fact, co-evoke each other. A different epistemology brings forth a different ontology, and a different ontology will correlate with a specific and different epistemology—each of them, as it were, bringing forth the correlative dimension (or co-creating it).

I don’t want to over-emphasize the role of QM in Integral Theory. I do want to point out, however, that—starting with Karl Popper—the role of science in CR has been pervasive, but science has been changing in profound ways that CR seems not to have kept up with. If ever there was a case of “means of knowing” governing in many ways “modes of being,” QM is it, undeniably. And given that QM is the most successful physical theory in history, one’s “ontology” should probably line up with it.

I might mention that it’s not just the existence of the 4 quadrants that is important—many theorists include the 4 quadrants—but rather their being 4 different dimensions of the same occasion, moment to moment, that is distinctive to IT. The 4 quadrants, further, go all the way down, and this means that consciousness itself goes all the way down, as an intrinsic part of the very fabric of the Kosmos itself. This is what sets Integral Theory apart from so many other theories. Aspects of consciousness—which itself is primarily an opening or clearing in which subjective and objective phenomena can emerge—include:

—creativity (as part of the very opening in which newness and novelty can appear, and the means by which it can appear)

—an automatic epistemic-prehension of the preceding moment (which co-creates or helps bring forth the being or ontology of the present moment—its being “grasped” is what brings it forth, and its being prehended by an interpretant, a la Peirce, is what gives the unavoidable interpretive twist to its being)

—while, at the same time, the include part (of transcend and include) means the previous moment, once subject but now object of the new subject, is included or literally taken into the being of the new subject, thus altering the new subject’s very being or ontology in the specific act of inclusion—again, epistemology-consciousness and holonic-being are co-creative and co-determining as two aspects of the Whole real occasion. Sucking epistemic-consciousness-feeling out of the holon, leaving only its dead and denuded being or ontology is effectively to kill the being in question, and anthropocentrically to transfer all the epistemic-knowing-feeling-consciousness dimensions to humans alone, who then propose theories about this denuded level of being that they call “the real.” This is tragic.

—also, as regards the “include” part of “transcend and include”—while the transcend part is Eros, or Spirit-in-action (or Spirit-in-self-organization), and is injecting Spiritual creativity into every moment (thus making evolution “self-organization through self-transcendence,” as Erich Jantsch put it)—while that is happening, the include part is taking care of those aspects generally known as “causality” and induction. If the degree of creativity or novelty in a holon-being is extremely small (as with, say, a quark), then the previous moment’s including component will be by far the strongest determinant of the new subject, and the new subject will seem completely deterministic (having little creativity to counter the causality). But Whitehead points out that no being’s creativity is absolutely zero, only vanishingly small, and thus strict determinism or strict causality doesn’t exist (the same as maintained by QM). Further, the higher on the Great Nest that a holon appears, the more novelty and creativity it possesses—so a physicist can predict where Uranus will be, more or less, a thousand years from now, but no biologist can tell you where my dog will be one minute from now. But for those holon-beings with little creativity, the “transcend and include” mechanics accounts for an answer to Hume’s critique of both causality and induction (i.e., accounts for their existence, even as both become less and less the higher the degree of development and evolution).
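The point that vanishingly small creativity looks like strict determinism, while larger creativity defeats prediction, can be made concrete with a toy model. This Python sketch is purely illustrative; the drift of 1.0 per step, the Gaussian noise term, and the “creativity” values are hypothetical stand-ins, not quantities from IT or physics. Each step “includes” the previous moment as a fixed causal drift and “transcends” it with creativity-scaled novelty:

```python
import random

def trajectory(creativity, steps=1000, seed=42):
    """One holon's path: the 'include' part is a fixed drift of 1.0 per step
    (causality); the 'transcend' part adds creativity-scaled novelty."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += 1.0 + creativity * rng.gauss(0.0, 1.0)
    return x

predicted = 1000.0  # what pure causality (zero creativity) would give
quark_like = trajectory(creativity=1e-6)  # novelty vanishingly small
dog_like = trajectory(creativity=10.0)    # novelty large

# The quark-like holon lands within a hair of the deterministic prediction;
# the dog-like holon's endpoint is effectively unforecastable.
print(abs(quark_like - predicted), abs(dog_like - predicted))
```

With creativity near zero, a thousand steps land within a rounding error of the purely causal prediction (Uranus-style predictability); with creativity large, the endpoint wanders far from it (dog-style).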

I do want to repeat that there is much in CR that I appreciate. I particularly appreciate having an ally against the relativism of extreme postmodernism (even if, alas, I still find problems in how CR goes about doing this, by ripping consciousness out of the Kosmos and leaving “the real” to be merely a denuded “ontology”). But its heart is in the right place, one might say, and Bhaskar himself is a truly extraordinary human being, and everything a philosopher should be, in my humble opinion (it reminds me, somewhat grandiosely, I guess, of what Habermas said about Foucault after their famous meeting—“He’s a real philosopher”—praise indeed from Habermas). The funny thing is, several theorists have pointed out how CR and IT can be brought into general (and even quite close) agreement, with a few fundamental changes: me, accepting ontology as “the real”; and CR, accepting epistemic-ontic as correlative dimensions of the same actual Wholeness of sentient holons going all the way down. As I read CR, I keep seeing it subtly—very subtly—reducing everything to ultimate anchorage in the essentially prehension-free Right-Hand quadrants (and I’m sure CR sees IT as subtly reducing everything to the Left-Hand quadrants). But my position is, and remains, that all 4 quadrants are equally real, equally present, tetra-enacting, and tetra-evolving, and anything less than that (along with levels, lines, states, and types, fulcrums and switch-points, Integral Methodological Pluralism, and Integral Post-Metaphysics) can scarcely be called “integral.”
Read the other two excerpts, beginning here.

Monday, July 09, 2012

Michael Shermer - What Happens to Consciousness When We Die

In this recent article from Scientific American, Michael Shermer debunks one of the most annoying arguments I see among New Age people (like Deepak Chopra, who is the target in this article) - the anthropocentric claim that consciousness is fundamental to the existence of the universe.

The logic gap in this perspective is astounding (and Shermer doesn't even get into that part of it) - the belief that human consciousness, which has been around a mere 500,000 years (and that's being extremely generous), is necessary for the existence of the universe, which has been here for nearly 14 billion years, is mind-bogglingly incoherent to me.

What Happens to Consciousness When We Die

The death of the brain means subjective experiences are neurochemistry


 

"Where is the experience of red in your brain?" The question was put to me by Deepak Chopra at his Sages and Scientists Symposium in Carlsbad, Calif., on March 3. A posse of presenters argued that the lack of a complete theory by neuroscientists regarding how neural activity translates into conscious experiences (such as redness) means that a physicalist approach is inadequate or wrong. "The idea that subjective experience is a result of electrochemical activity remains a hypothesis," Chopra elaborated in an e-mail. "It is as much of a speculation as the idea that consciousness is fundamental and that it causes brain activity and creates the properties and objects of the material world."

"Where is Aunt Millie's mind when her brain dies of Alzheimer's?" I countered to Chopra. Aunt Millie was an impermanent pattern of behavior of the universe and returned to the potential she emerged from, Chopra rejoined. In the philosophic framework of Eastern traditions, ego identity is an illusion and the goal of enlightenment is to transcend to a more universal nonlocal, nonmaterial identity.

The hypothesis that the brain creates consciousness, however, has vastly more evidence for it than the hypothesis that consciousness creates the brain. Damage to the fusiform gyrus of the temporal lobe, for example, causes face blindness, and stimulation of this same area causes people to see faces spontaneously. Stroke-caused damage to the visual cortex region called V1 leads to loss of conscious visual perception. Changes in conscious experience can be directly measured by functional MRI, electroencephalography and single-neuron recordings. Neuroscientists can predict human choices from brain-scanning activity before the subject is even consciously aware of the decisions made. Using brain scans alone, neuroscientists have even been able to reconstruct, on a computer screen, what someone is seeing.

Thousands of experiments confirm the hypothesis that neurochemical processes produce subjective experiences. The fact that neuroscientists are not in agreement over which physicalist theory best accounts for mind does not mean that the hypothesis that consciousness creates matter holds equal standing. In defense, Chopra sent me a 2008 paper published in Mind and Matter by University of California, Irvine, cognitive scientist Donald D. Hoffman: "Conscious Realism and the Mind-Body Problem."
Conscious realism asserts that the objective world, i.e., the world whose existence does not depend on the perceptions of a particular observer, consists entirely of conscious agents. Consciousness is fundamental to the cosmos and gives rise to particles and fields. "It is not a latecomer in the evolutionary history of the universe, arising from complex interactions of unconscious matter and fields," Hoffman writes. "Consciousness is first; matter and fields depend on it for their very existence."

Where is the evidence for consciousness being fundamental to the cosmos? Here Hoffman turns to how human observers construct the visual shapes, colors, textures and motions of objects. Our senses do not construct an approximation of physical reality in our brain, he argues, but instead operate more like a graphical user interface system that bears little to no resemblance to what actually goes on inside the computer. In Hoffman's view, our senses operate to construct reality, not to reconstruct it. Further, it does not require the hypothesis of independently existing physical objects.

How does consciousness cause matter to materialize? We are not told. Where (and how) did consciousness exist before there was matter? We are left wondering. As far as I can tell, all the evidence points in the direction of brains causing mind, but no evidence indicates reverse causality. This whole line of reasoning, in fact, seems to be based on something akin to a God of the gaps argument, where physicalist gaps are filled with nonphysicalist agents, be they omniscient deities or conscious agents.

No one denies that consciousness is a hard problem. But before we reify consciousness to the level of an independent agency capable of creating its own reality, let's give the hypotheses we do have for how brains create mind more time. Because we know for a fact that measurable consciousness dies when the brain dies, until proved otherwise, the default hypothesis must be that brains cause consciousness. I am, therefore I think.
 
SCIENTIFIC AMERICAN ONLINE
Comment on this article at ScientificAmerican.com/jul2012

Friday, July 06, 2012

Jeremy Trombley - Three Kinds of Anthropocentrism


This cool article comes from Jeremy Trombley's Struggle Forever blog (A Guide to Utopia). In this post, he outlines three central forms of anthropocentrism - a very useful guide for those of us who tend to find this aspect of spirituality and science rather annoying.


He makes a great point in the final paragraph - when philosophers (and others) use the term anthropocentrism, they often have not defined it clearly enough to be sure everyone is discussing the same idea. This post will help with that, for those who care.


I propose a fourth category - probably too general a version of the word's usage - see below.


Anthropocentrisms

Recently there has been a lot of talk about non-anthropocentrism, and what that would mean for ethics, politics, and philosophy in general.  I think some of the difficulty in agreement comes from the fact that different people have different conceptions of anthropocentrism and therefore different thresholds for what constitutes non-anthropocentrism.  I remember thinking a lot about this during a course I took in the Fall of 2010.  It was a class in environmental ethics, so we discussed anthropocentrism a lot.  What became clear to me through our readings and in our discussions was that my definition of anthropocentrism was markedly different from the conceptions put forward by the authors and my classmates.  The difference made my threshold for accepting a given approach or philosophy as non-anthropocentric somewhat higher than others'.  Let me break down a couple of the different approaches to anthropocentrism that I’ve noticed and explain how they affect our reactions to different philosophies.


1) Boundary anthropocentrism – This is, as far as I can tell, the most common approach to anthropocentrism.  It argues that anthropocentric philosophies arbitrarily circumscribe ethical consideration to humans.  Thus an arbitrary boundary is created which limits the ethical consideration that can be given to non-humans.  The solution to this – the way to create a non-anthropocentric approach – is to extend the boundary to encompass non-humans, or at least certain classes of non-humans (i.e. animals).  To take a simple example we can look at the discourse on animal rights.  Early rights theorists limited the ascription of rights to humans – animals simply were not considered to possess inalienable rights, but were treated as utilitarian objects for human consumption.  Animal rights discourse takes the same ethical basis – rights – but extends the boundary of consideration beyond the human such that animals would be thought to have intrinsic value and inalienable rights just as humans do.  The same approach has been used to extend certain rights to ecosystems and other non-human organisms and assemblages.  But it doesn’t have to be rights specifically – it could be any form of ethical argument that’s used for humans (utilitarian, deontological, etc.) that is then extended to non-humans.  Thus, for this type of non-anthropocentric philosopher, the extension of human values to non-human beings is sufficient to create a non-anthropocentric ethics.


2) Agential anthropocentrism – This approach to anthropocentrism is somewhat more stringent than boundary anthropocentrism.  In this approach anthropocentrism is the failure to recognize the active participation of non-humans in the co-construction of relationships.  It’s possible for a philosophy to be non-anthropocentric from a boundary perspective, but still be anthropocentric from an agential perspective.  For example, in a rights based framework, it’s possible to extend rights to animals, but to see them as essentially unable to speak, act, or participate in a relationship themselves.  Thus the extension of rights to animals is a fundamentally human act – that we humans value them, and therefore we ought to give them some ethical consideration.  Instead agential anthropocentrism would recognize that animals, plants, even rocks in some sense contribute to the relationships that we compose with them.  These relationships are often unbalanced simply because we fail to recognize them as active participants and instead treat them as mere matter to be manipulated to our will.  However, it argues that simply extending human values to non-humans is insufficient to overcome that imbalance.  We must instead understand how humans and non-humans relate to one another, how they alter and affect one another, and how they both actively compose those relationships.  Only then can we hope to overcome our anthropocentrism.  This also corresponds to some forms of anti-correlationism, I think, and is the approach I tend to take towards anthropocentrism.


3) Perspectival anthropocentrism – This, I think, is the approach Levi Bryant is advocating, and is even more stringent from what I can tell.  For this approach, anthropocentrism is defined as the inability to see and understand from a non-human perspective how the world is shaped and how they relate to one another.  To use the example Levi was toying with a few weeks back, it’s not enough to extend ethical consideration to a shark, nor is it enough to recognize the shark as an active participant in the co-construction of relationships.  Instead, we must understand the shark’s ethics in order to be non-anthropocentric.  A truly non-anthropocentric ethics would be able to describe the ways in which sharks, worms, jellyfish, bats, iguanas, plants, and maybe even computers, rocks, books, and houses see the world and interact with it ethically.  Such a task is likely to be impossible, and Levi recognizes this, so we content ourselves with boundary or agential non-anthropocentrisms, but these will always fall short of the true non-anthropocentric ethics that we need.


I think the differences between these approaches to anthropocentrism make communication between philosophers who follow them difficult to manage.  Often the definition of anthropocentrism, and thus the threshold for non-anthropocentrism, is taken for granted in these debates.  What ends up happening is an argument over how to achieve non-anthropocentrism, when what really needs to take place is a discussion about what exactly we mean when we talk about anthropocentrism and non-anthropocentrism.  I’m not in a position to advocate any of these (though I tend towards the agential approach in practice), but only wanted to point out a discrepancy I’ve seen in these discussions.  Hopefully it makes for better discussion in the long run.


Note: All of this applies to the concept of ethnocentrism as well, which I take to be a subtype of the broader category of anthropocentrism.  Also, these names (boundary, agential, and perspectival) are not ideal – they’re the best I could come up with in my morning haze.  If anyone wants to suggest better terms, I would wholeheartedly approve.
My offering:

Consciousness Anthropocentrism - The perspective that humanity or human consciousness is the most important species or form of consciousness not only on Earth, but for the entire Kosmos (also making sense of manifest reality and the universe only through that human perspective). This version of anthropocentrism is central to many forms of New Age spirituality (including variations of Integral Theory), and even to some schools of Buddhism (see B. Alan Wallace's Hidden Dimensions: The Unification of Physics and Consciousness). In this view, human consciousness is necessary (and sufficient?) for the existence of the known universe. The strong anthropic principle (SAP) is similar but not identical, offering the belief that "the Universe is compelled, in some sense, for conscious life to eventually emerge." SAP is foundational for notions of intelligent design.

Sunday, October 30, 2011

Deepak Chopra and Leonard Mlodinow: War of the Worldviews

Via FORA.tv:


 
Deepak Chopra and Leonard Mlodinow: War of the Worldviews from Sixth and I Historic Synagogue on FORA.tv


In War of the Worldviews: Science vs. Spirituality, the two bestselling authors debate the most fundamental questions of human existence.


How did the universe begin? Where did life come from? Is there design in nature?


Without defending organized religion, Chopra asserts that there is design in the universe and a deep intelligence behind the rise of life. Mlodinow, CalTech physicist and the writing collaborator of Stephen Hawking, argues for the viewpoint of science, specifically of modern quantum physics.


War of the Worldviews opens the public's eyes to the fascinating frontier where knowledge and mystery converge and every assumption about life, God, and the universe is open to debate. Program moderated by Timothy Shriver, Chairman and CEO of the Special Olympics.


Deepak Chopra
Deepak Chopra is the author of more than fifty books translated into more than thirty-five languages. Dr. Chopra is a fellow of the American College of Physicians, a member of the American Association of Clinical Endocrinologists, adjunct professor at the Kellogg School of Management, and a senior scientist with the Gallup Organization. He is founder and president of the Alliance for a New Humanity.


Time magazine heralds Deepak Chopra as one of the top 100 heroes and icons of the century and credits him as "the poet–prophet of alternative medicine."


Leonard Mlodinow
Physicist and author Leonard Mlodinow explores the extraordinary extent to which randomness, chance and probability influence and shape our work and everyday lives. Mlodinow was a writer for the television series MacGyver and Star Trek: The Next Generation and co-author with Stephen Hawking of the recent best-seller The Grand Design.

Thursday, September 01, 2011

On Point - David Deutsch And The Beginning of Infinity

I've been meaning to post this for a while now, but it got lost among the multitude of tabs I always seem to have open. In this episode, quantum physicist and philosopher David Deutsch speaks with Tom Ashbrook, for the On Point radio show, about his new book, The Beginning of Infinity: Explanations That Transform the World.

This is interesting stuff - but like many other similar theories, it strikes me as highly anthropocentric, and in a universe as large and unknown to us as ours, it's hubris to think we are the most important entities in the universe (or multiverse).
David Deutsch And The Beginning of Infinity

 
We’re talking about the scientific revolution and humanity’s place in the universe with David Deutsch, Oxford don who’s been called the founding father of quantum computing.

This composite image, provided by NASA, shows a galaxy where a recent supernova probably resulted in a black hole, seen as the bright white dot near the bottom middle of the picture. (AP)

Quantum computing genius and Oxford don David Deutsch is a thinker of such scale and audaciousness he can take your breath away. His bottom line is simple and breathtaking all at once.

It’s this: human beings are the most important entities in the universe. Or as Deutsch might have it, in the “multiverse.” For eons, little changed on this planet, he says. Progress was a joke. But once we got the Enlightenment and the scientific revolution, our powers of inquiry and discovery became infinite. Without limit.

This hour On Point: David Deutsch and the beginning of infinity.
-Tom Ashbrook

Guests

David Deutsch, quantum physicist and philosopher and author of The Beginning of Infinity.

From Tom’s Reading List

The New Scientist “One of the most remarkable features of science is the contrast between the enormous power of its explanations and the parochial means by which we create them. No human has ever visited a star, yet we look at dots in the sky and know they are distant white-hot nuclear furnaces. Physically, that experience consists of nothing more than brains responding to electrical impulses from our eyes — which can detect light only when it is inside them. That it was emitted far away and long ago are not things we experience. We know them only from theory.”

The New York Times “David Deutsch’s “Beginning of Infinity” is a brilliant and exhilarating and profoundly eccentric book. It’s about everything: art, science, philosophy, history, politics, evil, death, the future, infinity, bugs, thumbs, what have you. And the business of giving it anything like the attention it deserves, in the small space allotted here, is out of the question. But I will do what I can.”

TED Talk “People have always been “yearning to know” – what the stars are; cavemen probably wanted to know how to draw better. But for the better part of human experience, we were in a “protracted stagnation” – we wished for, and failed, in progress.”

Excerpt From The Beginning of Infinity

Introduction
Progress that is both rapid enough to be noticed and stable enough to continue over many generations has been achieved only once in the history of our species. It began at approximately the time of the scientific revolution, and is still under way. It has included improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.

Whenever there has been progress, there have been influential thinkers who denied that it was genuine, that it was desirable, or even that the concept was meaningful. They should have known better. There is indeed an objective difference between a false explanation and a true one, between chronic failure to solve a problem and solving it, and also between wrong and right, ugly and beautiful, suffering and its alleviation – and thus between stagnation and progress in the fullest sense.

In this book I argue that all progress, both theoretical and practical, has resulted from a single human activity: the quest for what I call good explanations. Though this quest is uniquely human, its effectiveness is also a fundamental fact about reality at the most impersonal, cosmic level – namely that it conforms to universal laws of nature that are indeed good explanations. This simple relationship between the cosmic and the human is a hint of a central role of people in the cosmic scheme of things.

Must progress come to an end – either in catastrophe or in some sort of completion – or is it unbounded? The answer is the latter. That unboundedness is the ‘infinity’ referred to in the title of this book. Explaining it, and the conditions under which progress can and cannot happen, entails a journey through virtually every fundamental field of science and philosophy. From each such field we learn that, although progress has no necessary end, it does have a necessary beginning: a cause, or an event with which it starts, or a necessary condition for it to take off and to thrive. Each of these beginnings is ‘the beginning of infinity’ as viewed from the perspective of that field. Many seem, superficially, to be unconnected. But they are all facets of a single attribute of reality, which I call the beginning of infinity.

The Reach of Explanations
Behind it all is surely an idea so simple, so beautiful, that when we grasp it — in a decade, a century, or a millennium — we will all say to each other, how could it have been otherwise?
~John Archibald Wheeler, Annals of the New York Academy of Sciences, 480 (1986)
To unaided human eyes, the universe beyond our solar system looks like a few thousand glowing dots in the night sky, plus the faint, hazy streaks of the Milky Way. But if you ask an astronomer what is out there in reality, you will be told not about dots or streaks, but about stars: spheres of incandescent gas millions of kilometres in diameter and light years away from us. You will be told that the sun is a typical star, and looks different from the others only because we are much closer to it — though still some 150 million kilometres away. Yet, even at those unimaginable distances, we are confident that we know what makes stars shine: you will be told that they are powered by the nuclear energy released by transmutation — the conversion of one chemical element into another (mainly hydrogen into helium).

Some types of transmutation happen spontaneously on Earth, in the decay of radioactive elements. This was first demonstrated in 1901, by the physicists Frederick Soddy and Ernest Rutherford, but the concept of transmutation was ancient. Alchemists had dreamed for centuries of transmuting ‘base metals’, such as iron or lead, into gold. They never came close to understanding what it would take to achieve that, so they never did so. But scientists in the twentieth century did. And so do stars, when they explode as supernovae. Base metals can be transmuted into gold by stars, and by intelligent beings who understand the processes that power stars, but by nothing else in the universe.

As for the Milky Way, you will be told that, despite its insubstantial appearance, it is the most massive object that we can see with the naked eye: a galaxy that includes stars by the hundreds of billions, bound by their mutual gravitation across tens of thousands of light years. We are seeing it from the inside, because we are part of it. You will be told that, although our night sky appears serene and largely changeless, the universe is seething with violent activity. Even a typical star converts millions of tonnes of mass into energy every second, with each gram releasing as much energy as an atom bomb. You will be told that within the range of our best telescopes, which can see more galaxies than there are stars in our galaxy, there are several supernova explosions per second, each briefly brighter than all the other stars in its galaxy put together. We do not know where life and intelligence exist, if at all, outside our solar system, so we do not know how many of those explosions are horrendous tragedies. But we do know that a supernova devastates all the planets that may be orbiting it, wiping out all life that may exist there — including any intelligent beings, unless they have technology far superior to ours. Its neutrino radiation alone would kill a human at a range of billions of kilometres, even if that entire distance were filled with lead shielding. Yet we owe our existence to supernovae: they are the source, through transmutation, of most of the elements of which our bodies, and our planet, are composed.
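Deutsch's figures here are easy to sanity-check with E = mc². A quick sketch (the 4.3 million tonnes per second is my assumption, based on the Sun's measured conversion rate; the excerpt itself says only "millions of tonnes"):

```python
# Back-of-envelope check of the stellar energy claims above, using E = m * c**2.

C = 2.998e8          # speed of light, m/s
KT_TNT = 4.184e12    # joules per kiloton of TNT

# Energy released by converting one gram of mass entirely into energy:
e_per_gram = 1e-3 * C**2              # joules (~9e13 J)
print(e_per_gram / KT_TNT)            # ~21.5 kilotons of TNT - an atom bomb's worth

# Total power if a star converts ~4.3 million tonnes of mass per second
# (an assumed figure, roughly the Sun's rate):
mass_rate = 4.3e9                     # kg/s
power = mass_rate * C**2              # watts
print(power)                          # ~3.9e26 W, close to the Sun's luminosity
```

So "each gram releasing as much energy as an atom bomb" holds up: one gram is comparable to the yield of an early fission weapon.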

There are phenomena that outshine supernovae. In March 2008 an X-ray telescope in Earth orbit detected an explosion of a type known as a ‘gamma-ray burst’, 7.5 billion light years away. That is halfway across the known universe. It was probably a single star collapsing to form a black hole — an object whose gravity is so intense that not even light can escape from its interior. The explosion was intrinsically brighter than a million supernovae, and would have been visible with the naked eye from Earth — though only faintly and for only a few seconds, so it is unlikely that anyone here saw it. Supernovae last longer, typically fading on a timescale of months, which allowed astronomers to see a few in our galaxy even before the invention of telescopes.

Another class of cosmic monsters, the intensely luminous objects known as quasars, are in a different league. Too distant to be seen with the naked eye, they can outshine a supernova for millions of years at a time. They are powered by massive black holes at the centres of galaxies, into which entire stars are falling — up to several per day for a large quasar — shredded by tidal effects as they spiral in. Intense magnetic fields channel some of the gravitational energy back out in the form of jets of high-energy particles, which illuminate the surrounding gas with the power of a trillion suns.

Conditions are still more extreme in the black hole’s interior (within the surface of no return known as the ‘event horizon’), where the very fabric of space and time may be being ripped apart. All this is happening in a relentlessly expanding universe that began about fourteen billion years ago with an all-encompassing explosion, the Big Bang, that makes all the other phenomena I have described seem mild and inconsequential by comparison. And that whole universe is just a sliver of an enormously larger entity, the multiverse, which includes vast numbers of such universes.

The physical world is not only much bigger and more violent than it once seemed, it is also immensely richer in detail, diversity and incident. Yet it all proceeds according to elegant laws of physics that we understand in some depth. I do not know which is more awesome: the phenomena themselves or the fact that we know so much about them.

How do we know? One of the most remarkable things about science is the contrast between the enormous reach and power of our best theories and the precarious, local means by which we create them. No human has ever been at the surface of a star, let alone visited the core where the transmutation happens and the energy is produced. Yet we see those cold dots in our sky and know that we are looking at the white-hot surfaces of distant nuclear furnaces. Physically, that experience consists of nothing other than our brains responding to electrical impulses from our eyes. And eyes can detect only light that is inside them at the time. The fact that the light was emitted very far away and long ago, and that much more was happening there than just the emission of light — those are not things that we see. We know them only from theory.

Scientific theories are explanations: assertions about what is out there and how it behaves. Where do these theories come from? For most of the history of science, it was mistakenly believed that we ‘derive’ them from the evidence of our senses — a philosophical doctrine known as empiricism:

Empiricism

For example, the philosopher John Locke wrote in 1689 that the mind is like ‘white paper’ on to which sensory experience writes, and that that is where all our knowledge of the physical world comes from. Another empiricist metaphor was that one could read knowledge from the ‘Book of Nature’ by making observations. Either way, the discoverer of knowledge is its passive recipient, not its creator.

But, in reality, scientific theories are not ‘derived’ from anything. We do not read them in nature, nor does nature write them into us. They are guesses — bold conjectures. Human minds create them by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them. We do not begin with ‘white paper’ at birth, but with inborn expectations and intentions and an innate ability to improve upon them using thought and experience. Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed. That is what ‘learning from experience’ is.

However, that was not properly understood until the mid twentieth century with the work of the philosopher Karl Popper. So historically it was empiricism that first provided a plausible defence for experimental science as we now know it. Empiricist philosophers criticized and rejected traditional approaches to knowledge such as deference to the authority of holy books and other ancient writings, as well as human authorities such as priests and academics, and belief in traditional lore, rules of thumb and hearsay. Empiricism also contradicted the opposing and surprisingly persistent idea that the senses are little more than sources of error to be ignored. And it was optimistic, being all about obtaining new knowledge, in contrast with the medieval fatalism that had expected everything important to be known already. Thus, despite being quite wrong about where scientific knowledge comes from, empiricism was a great step forward in both the philosophy and the history of science. Nevertheless, the question that sceptics (friendly and unfriendly) raised from the outset always remained: how can knowledge of what has not been experienced possibly be ‘derived’ from what has? What sort of thinking could possibly constitute a valid derivation of the one from the other? No one would expect to deduce the geography of Mars from a map of Earth, so why should we expect to be able to learn about physics on Mars from experiments done on Earth? Evidently, logical deduction alone would not do, because there is a logical gap: no amount of deduction applied to statements describing a set of experiences can reach a conclusion about anything other than those experiences.

The conventional wisdom was that the key is repetition: if one repeatedly has similar experiences under similar circumstances, then one is supposed to ‘extrapolate’ or ‘generalize’ that pattern and predict that it will continue. For instance, why do we expect the sun to rise tomorrow morning? Because in the past (so the argument goes) we have seen it do so whenever we have looked at the morning sky. From this we supposedly ‘derive’ the theory that under similar circumstances we shall always have that experience, or that we probably shall. On each occasion when that prediction comes true, and provided that it never fails, the probability that it will always come true is supposed to increase. Thus one supposedly obtains ever more reliable knowledge of the future from the past, and of the general from the particular. That alleged process was called ‘inductive inference’ or ‘induction’, and the doctrine that scientific theories are obtained in that way is called inductivism. To bridge the logical gap, some inductivists imagine that there is a principle of nature — the ‘principle of induction’ — that makes inductive inferences likely to be true. ‘The future will resemble the past’ is one popular version of this, and one could add ‘the distant resembles the near,’ ‘the unseen resembles the seen’ and so on.

But no one has ever managed to formulate a ‘principle of induction’ that is usable in practice for obtaining scientific theories from experiences. Historically, criticism of inductivism has focused on that failure, and on the logical gap that cannot be bridged. But that lets inductivism off far too lightly. For it concedes inductivism’s two most serious misconceptions.

First, inductivism purports to explain how science obtains predictions about experiences. But most of our theoretical knowledge simply does not take that form. Scientific explanations are about reality, most of which does not consist of anyone’s experiences. Astrophysics is not primarily about us (what we shall see if we look at the sky), but about what stars are: their composition and what makes them shine, and how they formed, and the universal laws of physics under which that happened. Most of that has never been observed: no one has experienced a billion years, or a light year; no one could have been present at the Big Bang; no one will ever touch a law of physics — except in their minds, through theory. All our predictions of how things will look are deduced from such explanations of how things are. So inductivism fails even to address how we can know about stars and the universe, as distinct from just dots in the sky.

The second fundamental misconception in inductivism is that scientific theories predict that ‘the future will resemble the past’, and that ‘the unseen resembles the seen’ and so on. (Or that it ‘probably’ will.) But in reality the future is unlike the past, the unseen very different from the seen. Science often predicts — and brings about — phenomena spectacularly different from anything that has been experienced before. For millennia people dreamed about flying, but they experienced only falling. Then they discovered good explanatory theories about flying, and then they flew — in that order. Before 1945, no human being had ever observed a nuclear-fission (atomic-bomb) explosion; there may never have been one in the history of the universe. Yet the first such explosion, and the conditions under which it would occur, had been accurately predicted — but not from the assumption that the future would be like the past. Even sunrise — that favourite example of inductivists — is not always observed every twenty-four hours: when viewed from orbit it may happen every ninety minutes, or not at all. And that was known from theory long before anyone had ever orbited the Earth.