
Wednesday, June 01, 2011

Discover - Why Did Consciousness Evolve, and How Can We Modify It? (2 Parts)

Malcolm MacIver posted this two-part article at the Discover Magazine site on the evolution and possible future of human consciousness. It appears he plans a third post in this series, so these two serve as the introduction.

To be clear - I'm agnostic about his claims. I see consciousness more as a process than as a feature of the human mind that can be manipulated. Obviously, processes can be altered - anyone who has done drugs knows that to be true.

But this article is proposing a version of transhumanism, and as I have mentioned many times, I want to see how this is going to work.

Why Did Consciousness Evolve, and How Can We Modify It?

Update 5/24/11: The conversation continues in Part II here.

I recently gave a talk at the Directors Guild of America as part of a panel on the “Science of Cyborgs” sponsored by the Science Entertainment Exchange. It was a fun time, and our moderators, Josh Clark and Chuck Bryant from the HowStuffWorks podcast, emceed the evening with just the right measure of humor and cultural insight. In my twelve minutes, I shared a theory of how consciousness evolved. My point was that if we understand the evolutionary basis of consciousness, maybe this will help us envision new ways our consciousness might evolve further in the future. That could be fun in terms of dreaming up new stories. I also believe that part of what inhibits us from taking effective action against long-term problems – like the global environmental crisis – may be found in the evolutionary origins of our ability to be aware.

This idea is so simple that I’m surprised I’ve not yet been able to find it already in circulation.

The idea is this: back in our watery days as fish, we lived in a medium that was inherently unfriendly to seeing things very far away. The technical way this is measured is the “attenuation length” of light through the medium. After light travels the attenuation length through a medium, about 63% of the light is blocked. The attenuation length of light in water is on the order of tens of meters. For a beast of a meter or two in length, which moves at a rate of about a body length or two per second, that’s a pretty short horizon of time and space. In just a few seconds, you’ll reach the edge of where you were able to see. If you’re down in the depths at all, or in less clear water, you may reach the edge of your perceptual horizon in about a second.
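To make those numbers concrete, here is a minimal Python sketch of the two quantities in this paragraph: the exponential falloff of light with distance, and the time it takes a swimmer to reach the edge of what it can see. The specific values are illustrative assumptions, not measurements.

```python
import math

def fraction_transmitted(distance_m, attenuation_length_m):
    """Exponential (Beer-Lambert) falloff: after one attenuation length,
    1/e (~37%) of the light remains, i.e. ~63% has been blocked."""
    return math.exp(-distance_m / attenuation_length_m)

attenuation_length = 30.0            # meters; "tens of meters" in clear water
print(fraction_transmitted(30.0, attenuation_length))   # ~0.37

# Perceptual horizon in time: how long until a swimming animal reaches
# the edge of what it can currently see (all values assumed).
body_length = 1.5                    # meters
speed = 2 * body_length              # m/s; a body length or two per second
visual_range = 10.0                  # meters; less when deep or murky
print(visual_range / speed)          # ~3.3 seconds to the horizon
```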

Think about that: life is coming at you at such a rate that every second unfolds a whole new tableau of potentially deadly threats, or prey you must grab in order to survive. Given such a scenario, we need highly reactive nervous systems, just like the ones we revert to when we find ourselves driving in a fog or at night along a dark and winding road. The problem is that there was no respite from this fog. It was an unalterable fact of how light moves through water, relative to our own movement abilities and size.

But then, about 350 million years ago in the Devonian Period, animals like Tiktaalik started making their first tentative forays onto land. From a perceptual point of view, it was a whole new world. You can see things, roughly speaking, 10,000 times better. So, just by the simple act of poking their eyes out of the water, our ancestors went from the mala vista of a fog to a buena vista of a clear day, where they could survey things for quite a considerable distance.

This puts the first such members of the “buena vista sensing club” into a very interesting position, from an evolutionary perspective. Think of the first animal that gains whatever mutation it might take to disconnect sensory input from motor output (before this point, their rapid linkage was necessary because of the need for reactivity to avoid becoming lunch). At this point, they can potentially survey multiple possible futures and pick the one most likely to lead to success. For example, rather than go straight for the gazelle and risk disclosing your position too soon, you may choose to stalk slowly along a line of bushes (wary that your future dinner is also seeing 10,000 times better than its watery ancestors) until you are much closer. Here’s an illustration of the two scenarios:

On the left, we have the situation when the distance we sense is close to the distance we will move in our reaction time (our reaction time is about 1/3 of a second; from that point to when we stop is a bit longer, like the diagrams of stopping distance when driving at night show). There isn’t a whole lot of space to plan over. On the right, we can fit three very different plans to get to our prey: b1-b3, among others.
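As a rough sketch of what the figure is illustrating, compare how much of the sensed range is left over for planning once you subtract the distance covered during reaction and stopping. The speeds and times below are assumptions chosen only to make the contrast visible.

```python
reaction_time = 1 / 3     # seconds, as in the text
stopping_time = 2 / 3     # seconds; "a bit longer" than reaction (assumed)
speed = 3.0               # m/s, assumed pursuit speed

# Distance covered before you can alter course: no planning happens here.
committed = speed * (reaction_time + stopping_time)

for sensed_range in (3.0, 100.0):    # roughly: murky water vs. open land
    plannable = sensed_range - committed
    print(f"sense {sensed_range:6.1f} m -> {plannable:6.1f} m left to plan over")
```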

So what does this have to do with consciousness?

In 1992, psychologist Bruce Bridgeman wrote that “Consciousness is the operation of the plan-executing mechanism, enabling behavior to be driven by plans rather than immediate environmental contingencies.” No theory of consciousness is likely to account for all of its varied senses, but at least in terms of consciousness-as-operation-of-the-plan-executing-mechanism, due to some very simple “facts of light,” dwelling on land may have been a necessary condition for giving us the ability to survey the contents of our mind. “Buena vista consciousness,” for lack of a better term, might have been the first kind of consciousness that selection pressures could have brought about.

Given this picture of how a certain kind of consciousness came about, what are the knobs we might twiddle, either for the love of story making, or so that our transhumanist future selves might be conscious in a different way?

Let me borrow a moral quandary from philosopher James Rachels. Maybe you’re eating a sandwich right now. There is a child, far away, who is not, and who is about to die for lack of food. Surely, if that child were beside you, you would share your sandwich. But, then, what’s keeping you from sharing that sandwich anyway? The shipping costs? That’s easily avoided – we find someone on the ground who can buy the sandwich locally. If you think through the various possibilities, the only answer you eventually come to is that the starving child is too far removed from your state of awareness to really matter to you. Likewise with any of a host of environmental devastations going on at this moment.

So, what if we massively expanded the blue space in the picture above, our sensorium? I don’t mean watching video of distant places (which surely is part of the way), but using artificial retina technology to pipe visual images from a disconnected place directly into your brain. Say, of the rain forest that is currently being destroyed so that an industrial meat producer in Peru can provide fast food chains in our country with low-cost beef. This would be disruptive technology on a big scale.

Here’s another thought experiment: Notice that there is only one being in the pictures above. Consciousness does seem to be for one being at a time. What if we reengineer things so that we see what others in our group see, or so that when you do something good, the entire group feels good, rather than just you? This kind of consciousness has been explored in science fiction (the Borg of Star Trek) and in art (Mathieu Briand’s Ubïq). We even know mechanisms of how something like the hive mind of bees works, such as regulation of the division of labor through various genes and hormones. Could something like this be the antidote to the endemic selfishness of Homo sapiens?

More details on the idea of buena vista consciousness can be found on pages 492-499 of this chapter I wrote in 2009.

UPDATE: A more technical paper describing how to quantify sensory and movement spaces is here.

March 14th, 2011 by Malcolm MacIver

* * * * * *

Why Did Consciousness Evolve, and How Can We Modify It, Pt. II: The Supremacy of Vision


I’m back after a hiatus of a few weeks to catch up on some stuff in the lab and the waning weeks of spring quarter teaching here at Northwestern. In my last post, I put forward an idea about why consciousness (defined in a narrow way as “contemplation of plans,” after Bridgeman) evolved, and used this idea to suggest some ways we might improve our consciousness in the future through augmentation technology.

Here’s a quick review: Back in our watery days as fish (roughly, 350 million years ago) we were in an environment that was not friendly to sensing things far away. This is because of a hard fact about light in water: our ability to see things at a distance is drastically compromised by attenuation and scattering of light in water. A useful figure of merit is “attenuation length,” which for light in water is tens of meters, while in air it is on the order of 100 kilometers. This is in perfectly clear water – add a bit of algae or other kinds of microorganisms and it goes down dramatically. Roughly speaking, vision in water is similar to driving a car in a fog. Since you’re not seeing very far out, the idea I’ve proposed goes, there is less of an advantage to planning over the space you can sense. On land, you can see a lot further out. Now, if a chance set of mutations gives you the ability to contemplate more than one possible future path through the space ahead, that mutation is more likely to be selected for.

Over at Cosmic Variance, Sean Carroll wrote a great summary of my post. Between my original post and his, many insightful questions and problems were raised by thoughtful readers.

In the interest of both responding to your comments and encouraging more insightful feedback, I’ll have a couple of further posts on this idea that will explore some of the recurring themes that have cropped up in the comments.

Today, since many commenters raised doubts about my claim that vision on land was key – pointing to the long-distance sensory capabilities of our senses of smell and hearing, among other points – I thought I’d start with a review of why, among biological senses, only vision (and, to a more limited degree, echolocation) is capable of giving access to the detail necessary for having multiple future paths to plan over. Are the other types of sensing that you’ve raised as important as sight?

Having the kind of overview needed for real-time planning of a path to a goal – at least an unpredictable, moving goal like prey – requires being able to access detail over a large amount of space relative to where you are moving in your immediate future. I’ll show why the only types of biological sensing capable of providing this sort of broad overview to animals are sight and echolocation, and why sight is easily the more powerful of the two.

Two important factors determine one’s ability to sense from a distance: resolution (the smallest object or feature that can still be distinguished) and range (how far away an object can be detected). Given our particular terrestrial environment, sight wins out over all other types of sensing on both counts.

First, a little bit on our yardsticks. Range designates the maximum typical distance at which something can be sensed. Resolution is fairly intuitive these days, since many of us have had the experience of working with some image we’ve grabbed from the internet with a resolution that is too low for our needs. You can measure it in a variety of ways, such as how many pixels can be resolved or displayed in a given unit of length. The new iPhone’s “retina” display has a resolution of 300 pixels per inch, for example, and as the publicity has suggested, this is similar to the resolving power of our eyes when the display is held at typical viewing distances.
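As a quick check on that claim, here is a small sketch comparing the angle a single 300-pixels-per-inch pixel subtends with the eye’s roughly one-arcminute (a sixtieth of a degree) resolution limit discussed later in the post. The 12-inch viewing distance is an assumption.

```python
import math

ppi = 300
viewing_distance_in = 12.0                  # assumed typical viewing distance
pixel_size_in = 1 / ppi

pixel_angle_deg = math.degrees(math.atan2(pixel_size_in, viewing_distance_in))
eye_limit_deg = 1 / 60                      # about one arcminute

print(f"one pixel subtends {pixel_angle_deg * 60:.2f} arcmin")    # ~0.95
print(f"the eye resolves about {eye_limit_deg * 60:.0f} arcmin")  # ~1
```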

For biological senses, resolution is partially set by how densely packed the sensory receptors are. For visual systems, the packing density in the fovea, the central part of the retina (for animals that have one), is extremely high, and the density rapidly diminishes away from the fovea.

But there is another constraint, besides how closely spaced the sensory receptors are: the wavelength of the energy you are sensing the world with. As a first approximation, you cannot resolve objects smaller than the wavelength of the energy you are sensing with. This is true whether you are sensing the consequence of probing the environment with that energy, as in the case of bats and their echolocation, or just passively absorbing the energy emitted by some external object, such as an object reflecting sunlight into your visual system. In the case of vision, the wavelengths are small compared to the spacing of our sensory receptors, so we don’t notice this issue. In the case of probing with sound using an artificial sense (for humans), such as ultrasound, or in the case of echolocation for bats and dolphins, the resolution limits imposed by the energy become more constraining. At 80,000 cycles per second (what some bats use, and four times higher than we can hear), the resolution limit is a bit under half a centimeter. Dolphins emit at somewhat higher frequencies, but because sound goes about four times faster in water than in air, they end up with a resolution of about 1 centimeter.
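Here is that wavelength arithmetic as a minimal sketch. The speeds of sound are standard textbook values; the dolphin emission frequency is an assumed round number in the range dolphins actually use.

```python
speed_of_sound_air = 343.0       # m/s, at room temperature
speed_of_sound_water = 1500.0    # m/s; about four times faster, as noted

def wavelength_cm(speed_m_s, frequency_hz):
    """Wavelength as a first-approximation floor on resolvable size."""
    return 100 * speed_m_s / frequency_hz

print(wavelength_cm(speed_of_sound_air, 80_000))     # bat, 80 kHz: ~0.43 cm
print(wavelength_cm(speed_of_sound_water, 150_000))  # dolphin (assumed 150 kHz): ~1 cm
```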

With that background on range and resolution, we can ask “what senses provide detailed overviews at far distances (say, at least 100 times longer than your body)?”

Let’s go through some of the biological possibilities: hearing; echolocation, also referred to as sonar (which also involves hearing, but at a much higher frequency, and includes the generation of an echolocation beam); touch; taste; smell; flow sensing (in science referred to as the “mechanosensory lateral line”); sensing of weak electric fields, called “electrosense”; active electrical sensing, called “electrolocation” (similar to normal electrosense, but like echolocation, includes not only perception of electric fields, but generation of them as well—so hearing is to echolocation what electrosense is to electrolocation); magnetosense, the ability to sense Earth’s magnetic field; vision (all types, including polarized light, and ultraviolet). For simplicity, I will consider these one at a time, although in many biological situations, multiple senses would be combined.

Passive hearing: sound can travel a long distance before it can no longer be heard. Underwater, it can travel even further. But there is a problem: hearing can tell you something out there is producing sound (like a screeching animal), but it cannot tell you anything about all the things that are not producing sounds, like the quietly resting boulders near the screeching animal or the ferns silently bending in the wind across the stream from said animal. This is in great contrast with vision in daylight: everything that reflects light, which is basically everything, can be seen.

As a consequence, when you hear something, you can get a sense of the direction of the object producing sound, and an estimate of distance. So you can get closer to the thing that produces the sound, but using sound alone, it’s challenging to be clever about how you get closer, since you don’t know anything about the stuff in between you and the thing generating sound (again, we are taking these senses one at a time). If you’ve ever played the Hot and Cold game as a kid, this is similar: the sound gives you enough information to tell if you’re getting hot or cold (approaching or moving away), plus some sense of distance and what kind of object is making the sound.

Active hearing (echolocation, or sonar): echolocation has many of the benefits of vision, but without requiring light. Bats and dolphins generate echolocation pulses which travel out and then return after being reflected by nearby objects. By moving the parts of their body that generate the echolocation pulse (mouth or nose), they can “scan” their environment. However, both resolution and range are significantly worse than in the case of vision, at least on land. We already went through the resolution limits of echolocation. In terms of range, in water echolocation is quite good – up to one hundred meters for the kinds of objects dolphins hunt – far better than vision in water. It’s interesting that a mammal that may have been accustomed to large visual ranges on land before going back to the ocean came up with the style of sensing that gives the best long-distance sensing in water. Due to more rapid attenuation of high frequencies in air, bats have a shorter range – on the order of a few meters for their prey.

The primary reason for the short range of echolocation systems is that their probe signal falls off with the fourth power of distance. This means that in order to double the range of an echolocation system, you need 16 times more power. Obtaining large ranges with echolocation, therefore, runs into energy consumption issues, and limits to the loudness of sounds that can be generated before damage to tissue ensues.
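The arithmetic behind that 16x figure is simple enough to sketch: if echo strength falls off with the fourth power of distance, then multiplying the range by k requires k^4 times the emitted power.

```python
def relative_power_needed(range_multiplier):
    """Emitted power needed to hold echo strength constant as range grows,
    given a fourth-power falloff (roughly r^2 out and r^2 back)."""
    return range_multiplier ** 4

for k in (2, 3, 10):
    print(f"{k}x the range needs {relative_power_needed(k)}x the power")
# 2x -> 16x, 3x -> 81x, 10x -> 10000x
```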

Touch/taste: This one is easy. While for small insects and rodents, touch appendages can reach out a good fraction of a body length, one body length is about the maximum for things like whiskers and antennae before they become unwieldy. Taste sensors are on the body surface or on things like the tongue, so taste, like touch, isn’t great for sensing at a distance.

Smell: Like passive hearing, the sense of smell can have fantastic range (sharks can smell injured prey from 5 km; male moths can find female moths at up to 10 km). But once again, it only tells you about things emitting odors. This allows you to approach them (if you are lucky with respect to environmental conditions), but you can’t use smell for a detailed overview of the space ahead. It’s fun imagining what would be needed in order to have smell work this way. Every object would need to be emitting a distinct odor, and downstream, these odors would have to stay relatively separated. Then, by scanning your nose through the odor array, you might be able to obtain an “olfactograph” of the space ahead!

Flow sensing: Fish and some other aquatic animals possess special sense organs for detecting flows due to the movement of other animals. This can guide predatory strikes. Seals have been demonstrated to be able to follow flows made by fish even after some time has elapsed. In general, however, flow sensing is very “near field,” operating over a range of a body length or two at most.

Passive electrosense. Because all animals in water generate a weak bioelectric field, the ability to detect these fields evolved very early in the history of animals. Electroreceptors are found, for example, in the most ancient vertebrate that still exists, the lamprey (so old it doesn’t even have a jaw). Many other aquatic animals, such as sharks, have them as well. The detection of external bioelectric fields occurs at very near range, about a body length or two.

Active electrosense (electrolocation). In active electrical sensing (also called electrolocation), an animal detects how its environment modulates a self-generated weak electric field. In my doctoral work, I showed that it is effective at less than a body length for prey-like objects, and perhaps a few body lengths for larger objects. Like echolocation, the falloff of active electrosense goes with the fourth power of distance, so it rapidly becomes prohibitive to sense at a distance.

Magnetic field sensing: Certain animals have been shown to detect the direction of Earth’s magnetic field. This is very useful for navigation. It should be clear, however, that it will not, in any circumstance, provide a detailed overview of the space ahead.

Vision: Given our relatively transparent environment, illuminated for at least a portion of the day with loads of light from the sun (about a thousand watts of light per square meter on a clear day at noon, a typical “radiant flux density” at the surface of Earth), vision reigns king as a system for imaging. It’s true that some land environments are dense enough to make visual range nearly as short as it is in water – but in places like tidal flats, savannah, and prairie, being able to see far ahead pays big dividends.

Because the wavelength of the electromagnetic radiation vision uses is tiny (500 billionths of a meter, for example, is one of the wavelengths we see with), the wavelength is far below the limit set by our sensory hardware: the spacing between our light receptors is quite large compared to the wavelength of light. With receptor spacing and optics as the limiting factors, we are able to resolve about a sixtieth of one degree with our visual systems. That means we can see a rabbit at a bit over half a mile, an astonishing capability compared to how far out our water-based ancestors could sense.
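A back-of-envelope check of the rabbit figure, with the rabbit’s apparent size as an assumption: at a sixtieth of a degree of resolution, how far away can a rabbit get before it shrinks below what the eye can resolve?

```python
import math

eye_limit_rad = math.radians(1 / 60)   # a sixtieth of one degree
rabbit_size_m = 0.25                   # assumed apparent size of a rabbit

max_distance_m = rabbit_size_m / eye_limit_rad
print(f"{max_distance_m:.0f} m, or {max_distance_m / 1609:.2f} miles")
# ~859 m, ~0.53 miles: "a bit over half a mile"
```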

In contrast, as my original post mentioned, the “attenuation length” of light in water – the distance over which 63% of the light is absorbed – is on the order of tens of meters in perfectly clear water. So light from the sun has to go down into the water, losing 63% of its intensity over tens of meters, and then reflect off an object and get to your eye, again losing 63% of its intensity over some tens of meters. In coastal waters, or anywhere the water is a bit cloudy with phytoplankton or algae, the attenuation length is ten times less – going down to meters. No matter what you do with your sensors and optics, this results in rapidly diminishing returns when trying to see things farther away.
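A small sketch of why the round trip hurts: the losses compound, so after one attenuation length down and one attenuation length back, only about 14% of the light survives. Distances are illustrative.

```python
import math

def transmitted(distance_m, attenuation_length_m=30.0):
    """Fraction of light surviving a given path length in clear water."""
    return math.exp(-distance_m / attenuation_length_m)

one_way = transmitted(30.0)        # ~0.37 survives going down
round_trip = one_way * one_way     # ~0.14 survives down and back
print(one_way, round_trip)
```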

On land, the attenuation length for light in air is on the order of 100 km. This is similar to the attenuation length of sound in water, which is why dolphins and whales do so well with echolocation underwater (but still, for dolphins only on the order of 100 meters for prey-sized objects).

That finishes our survey of what senses are good for quickly accessing points in a big amount of space. To sum up: to sense something means you need to detect energy emanating from the object. Some things, like sounds or odors emitted by animals or environmental phenomena, are sparsely distributed (not every point in your surroundings is emitting the energy), and this feature enables us to find the croaking frog or cracking branch.

But, in such situations, because our ability to sense these objects depends to some extent on the surrounding objects NOT emitting any such energy, it is not possible to get a detailed point by point sensation of a large amount of space. In contrast, with vision, echolocation, and active electrosense, energy is delivered to all objects of interest. So, you can sense them, whether or not they emit any kind of energy on their own. As such, only these senses (and similar ones) have the capacity to provide detailed point-by-point overviews. Of these, vision on land is by far the most powerful, in part just because there is an intense amount of energy being delivered by our Sun for at least a portion of the day, and easily delivered by artificial means otherwise; and in part, because the short wavelength means that vision systems can perceive with unparalleled acuity.

In the next post, I’ll explore the connection between having a big amount of space at hand, and planning to an unpredictable, moving goal, like another animal you’re hoping to dine on. I’ll argue that such planning requires you to have a big chunk of space at the beck and call of your sensory system, relative to the space you’re about to move into.

Image by Malcolm A. MacIver

Correction: In the original post I stated “Dolphins emit at somewhat higher frequencies, but because light goes about four times faster in water than in air, they end up with a resolution of about 1 centimeter.” Thanks to @Kees for pointing out my mistake – I meant that sound goes four times faster in water.

May 23rd, 2011 by Malcolm MacIver
