
Sunday, July 20, 2014

Donald D. Hoffman and Chetan Prakash - Objects of Consciousness


This is an interesting, though abstruse, article (from Frontiers in Psychology: Perception Science) arguing that, on grounds drawn from evolutionary biology and quantum physics, our current models of object perception require fundamental reformulation. It is a very long article, and it includes a lot of mathematics supposedly demonstrating how conscious agents can dynamically interact, and how the perception of objects and space-time can emerge from such dynamics.

I could not post the sections with the math due to limitations of the Blogger platform, so here is the first third or so of the article.

Considering that the authors thank Deepak Chopra and Stuart Hameroff in the acknowledgments, I have a hard time taking this article too seriously. Here, for example, is one quote that made me stop short:
When Gerald Edelman claimed, for instance, that “There is now a vast amount of empirical evidence to support the idea that consciousness emerges from the organization and operation of the brain” he assumed that the brain exists when unperceived (Edelman, 2004).
So, the brain doesn't exist if there is no one to observe it? Bullshit. This kind of lazy thinking, if it's fair to call it "thinking," is why researchers outside the mainstream are so rarely taken seriously.

Be that as it may, there are some interesting ideas in this piece.

Full Citation: 
Hoffman, D. D., & Prakash, C. (2014, June 17). Objects of consciousness. Frontiers in Psychology: Perception Science, 5:577. doi: 10.3389/fpsyg.2014.00577

Objects of consciousness

Donald D. Hoffman [1] and Chetan Prakash [2]
1. Department of Cognitive Sciences, University of California, Irvine, CA, USA
2. Department of Mathematics, California State University, San Bernardino, CA, USA

Abstract


Current models of visual perception typically assume that human vision estimates true properties of physical objects, properties that exist even if unperceived. However, recent studies of perceptual evolution, using evolutionary games and genetic algorithms, reveal that natural selection often drives true perceptions to extinction when they compete with perceptions tuned to fitness rather than truth: Perception guides adaptive behavior; it does not estimate a preexisting physical truth. Moreover, shifting from evolutionary biology to quantum physics, there is reason to disbelieve in preexisting physical truths: Certain interpretations of quantum theory deny that dynamical properties of physical objects have definite values when unobserved. In some of these interpretations the observer is fundamental, and wave functions are compendia of subjective probabilities, not preexisting elements of physical reality. These two considerations, from evolutionary biology and quantum physics, suggest that current models of object perception require fundamental reformulation. Here we begin such a reformulation, starting with a formal model of consciousness that we call a “conscious agent.” We develop the dynamics of interacting conscious agents, and study how the perception of objects and space-time can emerge from such dynamics. We show that one particular object, the quantum free particle, has a wave function that is identical in form to the harmonic functions that characterize the asymptotic dynamics of conscious agents; particles are vibrations not of strings but of interacting conscious agents. This allows us to reinterpret physical properties such as position, momentum, and energy as properties of interacting conscious agents, rather than as preexisting physical truths. We sketch how this approach might extend to the perception of relativistic quantum objects, and to classical objects of macroscopic scale.

Introduction

The human mind is predisposed to believe that physical objects, when unperceived, still exist with definite shapes and locations in space. The psychologist Piaget proposed that children start to develop this belief in “object permanence” around 9 months of age, and have it firmly entrenched just 9 months later (Piaget, 1954). Further studies suggest that object permanence starts as early as 3 months of age (Bower, 1974; Baillargeon and DeVos, 1991).
Belief in object permanence remains firmly entrenched into adulthood, even in the brightest of minds. Abraham Pais said of Einstein, “We often discussed his notions on objective reality. I recall that on one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it” (Pais, 1979). Einstein was troubled by interpretations of quantum theory that entail that the moon does not exist when unperceived.
Belief in object permanence underlies physicalist theories of the mind-body problem. When Gerald Edelman claimed, for instance, that “There is now a vast amount of empirical evidence to support the idea that consciousness emerges from the organization and operation of the brain” he assumed that the brain exists when unperceived (Edelman, 2004). When Francis Crick asserted the “astonishing hypothesis” that “You're nothing but a pack of neurons” he assumed that neurons exist when unperceived (Crick, 1994).
Object permanence underlies the standard account of evolution by natural selection. As James memorably put it, “The point which as evolutionists we are bound to hold fast to is that all the new forms of being that make their appearance are really nothing more than results of the redistribution of the original and unchanging materials. The self-same atoms which, chaotically dispersed, made the nebula, now, jammed and temporarily caught in peculiar positions, form our brains” (James, 1890). Evolutionary theory, in the standard account, assumes that atoms, and the replicating molecules that they form, exist when unperceived.
Object permanence underlies computational models of the visual perception of objects. David Marr, for instance, claimed “We … very definitely do compute explicit properties of the real visible surfaces out there, and one interesting aspect of the evolution of visual systems is the gradual movement toward the difficult task of representing progressively more objective aspects of the visual world” (Marr, 1982). For Marr, objects and their surfaces exist when unperceived, and human vision has evolved to describe their objective properties.
Bayesian theories of vision assume object permanence. They model object perception as a process of statistical estimation of object properties, such as surface shape and reflectance, that exist when unperceived. As Alan Yuille and Heinrich Bülthoff put it, “We define vision as perceptual inference, the estimation of scene properties from an image or sequence of images … ” (Yuille and Bülthoff, 1996).
There is a long and interesting history of debate about which properties of objects exist when unperceived. Shape, size, and position usually make the list. Others, such as taste and color, often do not. Democritus, a contemporary of Socrates, famously claimed, “by convention sweet and by convention bitter, by convention hot, by convention cold, by convention color; but in reality atoms and void” (Taylor, 1999).
Locke proposed that “primary qualities” of objects, such as “bulk, figure, or motion” exist when unperceived, but that “secondary qualities” of objects, such as “colors and smells” do not. He then claimed that “… the ideas of primary qualities of bodies are resemblances of them, and their patterns do really exist in the bodies themselves, but the ideas produced in us by these secondary qualities have no resemblance of them at all” (Locke, 1690).
Philosophical and scientific debate continues to this day on whether properties such as color exist when unperceived (Byrne and Hilbert, 2003; Hoffman, 2006). But object permanence, certainly regarding shape and position, is so deeply assumed by the scientific literature in the fields of psychophysics and computational perception that it is rarely discussed.
It is also assumed in the scientific study of consciousness and the mind-body problem. Here the widely acknowledged failure to create a plausible theory forces reflection on basic assumptions, including object permanence. But few researchers in fact give it up. To the contrary, the accepted view is that aspects of neural dynamics—from quantum-gravity induced collapses of wavefunctions at microtubules (Hameroff, 1998) to informational properties of re-entrant thalamo-cortical loops (Tononi, 2004)—cause, or give rise to, or are identical to, consciousness. As Colin McGinn puts it, “we know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so” (McGinn, 1989).

Evolution and Perception

The human mind is predisposed from early childhood to assume object permanence, to assume that objects have shapes and positions in space even when the objects and space are unperceived. It is reasonable to ask whether this assumption is a genuine insight into the nature of objective reality, or simply a habit that is perhaps useful but not necessarily insightful.
We can look to evolution for an answer. If we assume that our perceptual and cognitive capacities have been shaped, at least in part, by natural selection, then we can use formal models of evolution, such as evolutionary game theory (Lieberman et al., 2005; Nowak, 2006) and genetic algorithms (Mitchell, 1998), to explore if, and under what circumstances, natural selection favors perceptual representations that are genuine insights into the true nature of the objective world.
Evaluating object permanence on evolutionary grounds might seem quixotic, or at least unfair, given that we just noted that evolutionary theory, as it's standardly described, assumes object permanence (e.g., of DNA and the physical bodies of organisms). How then could one possibly use evolutionary theory to test what it assumes to be true?
However, Richard Dawkins and others have observed that the core of evolution by natural selection is an abstract algorithm with three key components: variation, selection, and retention (Dennett, 1995; Blackmore, 1999). This abstract algorithm constitutes a “universal Darwinism” that need not assume object permanence and can be profitably applied in many contexts beyond biological evolution. Thus, it is possible, without begging the question, to use formal models of evolution by natural selection to explore whether object permanence is an insight or not.
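Stated as code, the abstract algorithm is strikingly short. The sketch below is our own minimal, hypothetical rendering (the function names and the toy bit-string example are ours, not the authors'); any substrate that supplies variation, selection, and retention can be plugged in.

```python
import random

def universal_darwinism(population, fitness, mutate, generations=100):
    """A minimal variation/selection/retention loop; nothing here is
    specific to biology -- candidates can be genes, memes, or anything."""
    for _ in range(generations):
        # Variation: each candidate spawns a varied copy.
        offspring = [mutate(c) for c in population]
        # Selection: rank parents and offspring together by fitness.
        pool = sorted(population + offspring, key=fitness, reverse=True)
        # Retention: the fitter half persists into the next generation.
        population = pool[:len(population)]
    return population

# Toy usage: evolve 8-bit strings toward all ones.
flip = lambda s: [b ^ (random.random() < 0.1) for b in s]
best = universal_darwinism([[0] * 8 for _ in range(20)], sum, flip)
print(best[0])
```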
Jerry Fodor has criticized the theory of natural selection itself, arguing, for instance, that it impales itself with an intensional fallacy, viz., inferring from the premise that “evolution is a process in which creatures with adaptive traits are selected” to the conclusion that “evolution is a process in which creatures are selected for their adaptive traits” (Fodor and Piattelli-Palmarini, 2010). However, Fodor's critique seems wide of the mark (Futuyma, 2010) and the evidence for evolution by natural selection is overwhelming (Coyne, 2009; Dawkins, 2009).
What, then, do we find when we explore the evolution of perception using evolutionary games and genetic algorithms? The standard answer, at least among vision scientists, is that we should find that natural selection favors veridical perceptions, i.e., perceptions that accurately represent objective properties of the external world that exist when unperceived. Stephen Palmer, for instance, in a standard graduate-level textbook, states that “Evolutionarily speaking, visual perception is useful only if it is reasonably accurate … Indeed, vision is useful precisely because it is so accurate. By and large, what you see is what you get. When this is true, we have what is called veridical perception … perception that is consistent with the actual state of affairs in the environment. This is almost always the case with vision … ” (Palmer, 1999).
The argument, roughly, is that those of our predecessors whose perceptions were more veridical had a competitive advantage over those whose perceptions were less veridical. Thus, the genes that coded for more veridical perceptions were more likely to propagate to the next generation. We are, with good probability, the offspring of those who, in each succeeding generation, perceived more truly, and thus we can be confident that our own perceptions are, in the normal case, veridical.
The conclusion that natural selection favors veridical perceptions is central to current Bayesian models of perception, in which perceptual systems use Bayesian inference to estimate true properties of the objective world, properties such as shape, position, motion, and reflectance (Knill and Richards, 1996; Geisler and Diehl, 2003). Objects exist and have these properties when unperceived, and the function of perception is to accurately estimate pre-existing properties.
However, when we actually study the evolution of perception using Monte Carlo simulations of evolutionary games and genetic algorithms, we find that natural selection does not, in general, favor perceptions that are true reports of objective properties of the environment. Instead, it generally favors perceptual strategies that are tuned to fitness (Mark et al., 2010; Hoffman et al., 2013; Marion, 2013; Mark, 2013).
Why? Several principles emerge from the simulations. First, there is no free information. For every bit of information one obtains about the external world, one must pay a price in energy, e.g., in calories expended to obtain, process and retain that information. And for every calorie expended in perception, one must go out and kill something and eat it to get that calorie. So natural selection tends to favor perceptual systems that, ceteris paribus, use fewer calories. One way to use fewer calories is to see less truth, especially truth that is not informative about fitness.
Second, for every bit of information one obtains about the external world, one must pay a price in time. More information requires, in general, more time to obtain and process. But in the real world where predators are on the prowl and prey must be wary, the race is often to the swift. It is the slower gazelle that becomes lunch for the swifter cheetah. So natural selection tends to favor perceptual systems that, ceteris paribus, take less time. One way to take less time is, again, to see less truth, especially truth that is not informative about fitness.
Third, in a world where organisms are adapted to niches and require homeostatic mechanisms, the fitness functions guiding their evolution are generally not monotonic functions of structures or quantities in the world. Too much salt or too little can be devastating; something in between is just right for fitness. The same Goldilocks principle can hold for water, altitude, humidity, and so on. In these cases, perceptions that are tuned to fitness are ipso facto not tuned to the true structure of the world, because the two are not monotonically related; knowing the truth is not just irrelevant, it can be inimical to fitness. (The simulation sketch following these principles uses just such a non-monotonic fitness function.)
Fourth, in the generic case where noise and uncertainty are endemic to the perceptual process, a strategy that estimates a true state of the world and then uses the utility associated with that state to govern its decisions must throw away valuable information about utility. It will in general be driven to extinction by a strategy that does not estimate the true state of the world, and instead uses all the information about utility (Marion, 2013).
Fifth, more complex perceptual systems are more difficult to evolve. Monte Carlo simulations of genetic algorithms show that there is a combinatorial explosion in the complexity of the search required to evolve more complex perceptual systems. This combinatorial explosion itself is a selection pressure toward simpler perceptual systems.
In short, natural selection does not favor perceptual systems that see the truth in whole or in part. Instead, it favors perceptions that are fast, cheap, and tailored to guide behaviors needed to survive and reproduce. Perception is not about truth, it's about having kids. Genes coding for perceptual systems that increase the probability of having kids are ipso facto the genes that are more likely to code for perceptual systems in the next generation.
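None of the cited simulations appear in this excerpt, but their flavor is easy to convey. The toy sketch below is our own construction, not the Mark et al. setup: agents choose between two territories using a single perceptual bit, fitness is a non-monotonic Goldilocks function of the resource, and strategy shares evolve by replicator dynamics. The truth-tuned strategy is driven to extinction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # A Goldilocks (non-monotonic) fitness function over a resource in [0, 10]:
    # intermediate amounts are best, extremes are poor.
    return np.exp(-((w - 5.0) ** 2) / 4.0)

def mean_payoff(strategy, trials=20000):
    # Each trial offers two territories with random resource levels; the agent
    # sees only a one-bit percept of each and forages in the preferred one.
    w = rng.uniform(0.0, 10.0, size=(trials, 2))
    if strategy == "truth":
        percept = w > 5.0                 # bit tracks the true quantity
    else:
        percept = fitness(w) > 0.5        # bit tracks fitness itself
    choice = percept.argmax(axis=1)       # prefer a "high" bit; ties go to territory 0
    return fitness(w[np.arange(trials), choice]).mean()

# Replicator dynamics: a strategy's share grows in proportion to its payoff.
p_truth = 0.5
for _ in range(50):
    pt, pf = mean_payoff("truth"), mean_payoff("fitness")
    p_truth = p_truth * pt / (p_truth * pt + (1 - p_truth) * pf)
print(round(p_truth, 6))  # -> approximately 0: the truth strategy goes extinct
```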

The Interface Theory of Perception

Natural selection favors perceptions that are useful though not true. This might seem counterintuitive, even to experts in perception. Palmer, for instance, in the quote above, makes the plausible claim that “vision is useful precisely because it is so accurate” (Palmer, 1999). Geisler and Diehl agree, taking it as obvious that “In general, (perceptual) estimates that are nearer the truth have greater utility than those that are wide of the mark” (Geisler and Diehl, 2002). Feldman also takes it as obvious that “it is clearly desirable (say from an evolutionary point of view) for an organism to achieve veridical percepts of the world” (Feldman, 2013). Knill and Richards concur that vision “… involves the evolution of an organism's visual system to match the structure of the world … ” (Knill and Richards, 1996).
This assumption that perceptions are useful to the extent that they are true is prima facie plausible, and it comports well with the assumption of object permanence. For if our perceptions report to us a three-dimensional world containing objects with specific shapes and positions, and if these perceptual reports have been shaped by evolution to be true, then we can be confident that those objects really do, in the normal case, exist and have their positions and shapes even when unperceived.
So we find it plausible that perceptions are useful only if true, and we find it deeply counterintuitive to think otherwise. But studies with evolutionary games and genetic algorithms flatly contradict this deeply held assumption. Clearly our intuitions need a little help here. How can we try to understand perceptions that are useful but not true?
Fortunately, developments in computer technology have provided a convenient and helpful metaphor: the desktop of a windows interface (Hoffman, 1998, 2009, 2011, 2012, 2013; Mausfeld, 2002; Koenderink, 2011a; Hoffman and Singh, 2012; Singh and Hoffman, 2013). Suppose you are editing a text file and that the icon for that file is a blue rectangle sitting in the lower left corner of the desktop. If you click on that icon you can open the file and revise its text. If you drag that icon to the trash, you can delete the file. If you drag it to the icon for an external hard drive, you can create a backup of the file. So the icon is quite useful.
But is it true? Well, the only visible properties of the icon are its position, shape, and color. Do these properties of the icon resemble the true properties of the file? Clearly not. The file is not blue or rectangular, and it's probably not in the lower left corner of the computer. Indeed, files don't have a color or shape, and needn't have a well-defined position (e.g., the bits of the file could be spread widely over memory). So to even ask if the properties of the icon are true is to make a category error, and to completely misunderstand the purpose of the interface. One can reasonably ask whether the icon is usefully related to the file, but not whether it truly resembles the file.
Indeed, a critical function of the interface is to hide the truth. Most computer users don't want to see the complexity of the integrated circuits, voltages, and magnetic fields that are busy behind the scenes when they edit a file. If they had to deal with that complexity, they might never finish their work on the file. So the interface is designed to allow the user to interact effectively with the computer while remaining largely ignorant of its true architecture.
Ignorant, also, of its true causal structure. When a user drags a file icon to an icon of an external drive, it looks obvious that the movement of the file icon to the drive icon causes the file to be copied. But this is just a useful fiction. The movement of the file icon causes nothing in the computer. It simply serves to guide the user's operation of a mouse, triggering a complex chain of causal events inside the computer, completely hidden from the user. Forcing the user to see the true causal chain would be an impediment, not a help.
Turning now to apply the interface metaphor to human perception, the idea is that natural selection has not shaped our perceptions to be insights into the true structure and causal nature of objective reality, but has instead shaped our perceptions to be a species-specific user interface, fashioned to guide the behaviors that we need to survive and reproduce. Space and time are the desktop of our perceptual interface, and three-dimensional objects are icons on that desktop.
Our interface gives the impression that it reveals true cause and effect relations. When one billiard ball hits a second, it certainly looks as though the first causes the second to careen away. But this appearance of cause and effect is simply a useful fiction, just as it is for the icons on the computer desktop.
There is an obvious rejoinder: “If that cobra is just an icon of your interface with no causal powers, why don't you grab it by the tail?” The answer is straightforward: “I don't grab the cobra for the same reason I don't carelessly drag my file icon to the trash—I could lose a lot of work. I don't take my icons literally: The file, unlike its icon, is not literally blue or rectangular. But I do take my icons seriously.”
Similarly, evolution has shaped us with a species-specific interface whose icons we must take seriously. If there is a cliff, don't step over. If there is a cobra, don't grab its tail. Natural selection has endowed us with perceptions that function to guide adaptive behaviors, and we ignore them at our own peril.
But, given that we must take our perceptions seriously, it does not follow that we must take them literally. Such an inference is natural, in the sense that most of us, even the brightest, make it automatically. When Samuel Johnson heard Berkeley's theory that “To be is to be perceived” he kicked a stone and said, “I refute it thus!” (Boswell, 1986) Johnson observed that one must take the stone seriously or risk injury. From this Johnson concluded that one must take the stone literally. But this inference is fallacious.
One might object that there still is an important sense in which our perceptual icon of, say, a cobra does resemble the true objective reality: The consequences for an observer of grabbing the tail of the cobra are precisely the consequences that would obtain if the objective reality were in fact a cobra. Perceptions and internal information-bearing structures are useful for fitness-preserving or enhancing behavior because there is some mutual information between the predicted utility of a behavior (like escaping) and its actual utility. If there's no mutual information and no mechanism for increasing mutual information, fitness is low and stays that way. Here we use mutual information in the sense of standard information theory (Cover and Thomas, 2006).
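For reference, the mutual information between two discrete random variables X and Y is defined as

```latex
I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)},
```

which is zero exactly when X and Y are independent, i.e., when the percept carries no information about the variable in question.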
This point is well-taken. Our perceptual icons do give us genuine information about fitness, and fitness can be considered an aspect of objective reality. Indeed, in Gibson's ecological theory of perception, our perceptions primarily resonate to “affordances,” those aspects of the objective world that have important consequences for fitness (Gibson, 1979). While we disagree with Gibson's direct realism and denial of information processing in perception, we agree with his emphasis on the tuning of perception to fitness.
So we must clarify the relationship between truth and fitness. In evolutionary theory it is as follows. If W denotes the objective world then, for a fixed organism, state, and action, we can think of a fitness function to be a function f:W → [0,1], which assigns to each state w of W a fitness value f(w). If, for instance, the organism is a hungry cheetah and the action is eating, then f might assign a high fitness value to world state w in which fresh raw meat is available; but if the organism is a hungry cow then f might assign a low fitness value to the same state w.
If the true probabilities of states in the world are given by a probability measure m on W, then one can define a new probability measure mf on W, where for any event A of W, mf(A) is simply the integral of f over A with respect to m; mf must of course be normalized so that mf(W) = 1.
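In symbols, with the normalization made explicit:

```latex
m_f(A) \;=\; \frac{\int_A f \, dm}{\int_W f \, dm} \qquad \text{for every event } A \text{ of } W,
```

so that m_f(W) = 1 automatically.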
And here is the key point. A perceptual system that is tuned to maximize the mutual information with m will not, in general, maximize mutual information with mf (Cover and Thomas, 2006). Being tuned to truth, i.e., maximizing mutual information with m, is not the same as being tuned to fitness, i.e., maximizing mutual information with mf. Indeed, depending on the fitness function f, a perceptual system tuned to truth might carry little or no information about fitness, and vice versa. It is in this sense that the interface theory of perception claims that our perceptions are tuned to fitness rather than truth.
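The divergence between the two tunings is easy to verify numerically. In this toy check (the ten-state world, the uniform measure, and the sharply peaked fitness function are our arbitrary choices, not the authors'), the one-bit percept that maximizes information under m is nearly worthless under m_f, and vice versa:

```python
import numpy as np

w = np.arange(10)                      # world states
m = np.full(10, 0.1)                   # true measure m: uniform
f = np.exp(-((w - 3.0) ** 2) / 0.5)    # hypothetical narrow fitness peak at w = 3
mf = m * f / (m * f).sum()             # fitness-weighted measure m_f

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def percept_info(split, measure):
    # A deterministic one-bit percept X = split(w); since X is a function
    # of w, I(X; W) = H(X), computed under the given measure on W.
    high = measure[split].sum()
    return entropy_bits(np.array([high, 1.0 - high]))

truth_split = w >= 5       # percept tuned to the true quantity
fit_split = f > 0.5        # percept tuned to fitness

print(percept_info(truth_split, m), percept_info(fit_split, m))    # under m: truth wins
print(percept_info(truth_split, mf), percept_info(fit_split, mf))  # under m_f: fitness wins
```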
There is another rejoinder: “The interface metaphor is nothing new. Physicists have told us for more than a century that solid objects are really mostly empty space. So an apparently solid stone isn't the true reality, but its atoms and subatomic particles are.” Physicists have indeed said this since Rutherford published his theory of the atomic nucleus in 1911 (Rutherford, 1911). But the interface metaphor says something more radical. It says that space and time themselves are just a desktop, and that anything in space and time, including atoms and subatomic particles, are themselves simply icons. It's not just the moon that isn't there when one doesn't look, it's the atoms, leptons and quarks themselves that aren't there. Object permanence fails for microscopic objects just as it does for macroscopic.
This claim is, to contemporary sensibilities, radical. But there is a perspective on the intellectual evolution of humanity over the last few centuries for which the interface theory seems a natural next step. According to this perspective, humanity has gradually been letting go of the false belief that the way H. sapiens sees the world is an insight into objective reality.
Many ancient cultures, including the pre-Socratic Greeks, believed the world was flat, for the obvious reason that it looks that way. Aristotle became persuaded, on empirical grounds, that the earth is spherical, and this view gradually spread to other cultures. Reality, we learned, departed in important respects from some of our perceptions.
But then a geocentric model of the universe, in which the earth is at the center and everything revolves around it, still held sway. Why? Because that's the way things look to our unaided perceptions. The earth looks like it's not moving, and the sun, moon, planets, and stars look like they circle a stationary earth. Not until the work of Copernicus and Kepler did we recognize that once again reality differs, in important respects, from our perceptions. This was difficult to swallow. Galileo was forced to recant in the Vatican basement, and Giordano Bruno was burned at the stake. But we finally, and painfully, accepted the mismatch between our perceptions and certain aspects of reality.
The interface theory entails that these first two steps were mere warm up. The next step in the intellectual history of H. sapiens is a big one. We must recognize that all of our perceptions of space, time and objects no more reflect reality than does our perception of a flat earth. It's not just this or that aspect of our perceptions that must be corrected, it is the entire framework of a space-time containing objects, the fundamental organization of our perceptual systems, that must be recognized as a mere species-specific mode of perception rather than an insight into objective reality.
By this time it should be clear that, if the arguments given here are sound, then the current Bayesian models of object perception need more than tinkering around the edges; they need fundamental transformation. And this transformation will necessarily have ramifications for scientific questions well beyond the confines of computational models of object perception.
One example is the mind-body problem. A theory in which objects and space-time do not exist unperceived and do not have causal powers, cannot propose that neurons—which by hypothesis do not exist unperceived and do not have causal powers—cause any of our behaviors or conscious experiences. This is so contrary to contemporary thought in this field that it is likely to be taken as a reductio of the view rather than as an alternative direction of inquiry for a field that has yet to construct a plausible theory.

Definition of Conscious Agents

If our reasoning has been sound, then space-time and three-dimensional objects have no causal powers and do not exist unperceived. Therefore, we need a fundamentally new foundation from which to construct a theory of objects. Here we explore the possibility that consciousness is that new foundation, and seek a mathematically precise theory. The idea is that a theory of objects requires, first, a theory of subjects.
This is, of course, a non-trivial endeavor. Frank Wilczek, when discussing the interpretation of quantum theory, said, “The relevant literature is famously contentious and obscure. I believe it will remain so until someone constructs, within the formalism of quantum mechanics, an ‘observer,’ that is, a model entity whose states correspond to a recognizable caricature of conscious awareness … That is a formidable project, extending well beyond what is conventionally considered physics” (Wilczek, 2006).
The approach we take toward constructing a theory of consciousness is similar to the approach Alan Turing took toward constructing a theory of computation. Turing proposed a simple but rigorous formalism, now called the Turing machine (Turing, 1937; Herken, 1988). It consists of seven components: (1) a finite set of states, (2) a finite set of symbols, (3) a special blank symbol, (4) a finite set of input symbols, (5) a start state, (6) a set of halt states, and (7) a finite set of simple transition rules (Hopcroft et al., 2006).
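As an illustration, here is a hypothetical sketch of such a machine in code; the bit-flipping example and all names are ours, but the seven components appear exactly as listed: states, symbols, a blank, input symbols, a start state, halt states, and transition rules.

```python
def run_turing_machine(tape, transitions, start, halts, blank="_"):
    """Run a one-tape Turing machine.
    `transitions` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right)."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    state, head = start, 0
    while state not in halts:
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape))

# States {scan, done}, symbols {0, 1, _}, input symbols {0, 1}, blank "_",
# start state "scan", halt states {"done"}, and three transition rules
# that invert each bit and halt at the first blank.
rules = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_turing_machine("0110", rules, "scan", {"done"}))  # -> 1001_
```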
Turing and others then conjectured that a function is algorithmically computable if and only if it is computable by a Turing machine. This “Church-Turing Thesis” can't be proven, but it could in principle be falsified by a counterexample, e.g., by some example of a procedure that everyone agreed was computable but for which no Turing machine existed. No counterexample has yet been found, and the Church-Turing thesis is considered secure, even definitional.
Similarly, to construct a theory of consciousness we propose a simple but rigorous formalism called a conscious agent, consisting of six components. We then state the conscious agent thesis, which claims that every property of consciousness can be represented by some property of a conscious agent or system of interacting conscious agents. The hope is to start with a small and simple set of definitions and assumptions, and then to have a complete theory of consciousness arise as a series of theorems and proofs (or simulations, when complexity precludes proof). We want a theory of consciousness qua consciousness, i.e., of consciousness on its own terms, not as something derivative or emergent from a prior physical world.
No doubt this approach will strike many as prima facie absurd. It is a commonplace in cognitive neuroscience, for instance, that most of our mental processes are unconscious processes (Bargh and Morsella, 2008). The standard account holds that well over 90% of mental processes proceed without conscious awareness. Therefore, the proposal that consciousness is fundamental is, to contemporary thought, an amusing anachronism not worth serious consideration.
This critique is apt. It's clear from many experiments that each of us is indeed unaware of most of the mental processes underlying our actions and conscious perceptions. But this is no surprise, given the interface theory of perception. Our perceptual interfaces have been shaped by natural selection to guide, quickly and cheaply, behaviors that are adaptive in our niche. They have not been shaped to provide exhaustive insights into truth. In consequence, our perceptions have endogenous limits to the range and complexity of their representations. It was not adaptive to be aware of most of our mental processing, just as it was not adaptive to be aware of how our kidneys filter blood.
We must be careful not to assume that limitations of our species-specific perceptions are insights into the true nature of reality. I am not directly conscious of my friend's mental processes, but that does not entail that my friend is unconscious. Similarly, I am not directly conscious of most of my own mental processes, but that does not entail that they are unconscious. Our perceptual systems have finite capacity, and will therefore inevitably simplify and omit. We are well-advised not to mistake our omissions and simplifications for insights into reality.
There are of course many other critiques of an approach that takes consciousness to be fundamental: How can such an approach explain matter, the fundamental forces, the Big Bang, the genesis and structure of space-time, the laws of physics, evolution by natural selection, and the many neural correlates of consciousness? These are non-trivial challenges that must be faced by the theory of conscious agents. But for the moment we will postpone them and develop the theory of conscious agents itself.
Conscious agent is a technical term, with a precise mathematical definition that will be presented shortly. To understand the technical term, it can be helpful to have some intuitions that motivate the definition. The intuitions are just intuitions, and if they don't help they can be dropped. What does the heavy lifting is the definition itself.
A key intuition is that consciousness involves three processes: perception, decision, and action.
In the process of perception, a conscious agent interacts with the world and, in consequence, has conscious experiences.
In the process of decision, a conscious agent chooses what actions to take based on the conscious experiences it has.
In the process of action, the conscious agent interacts with the world in light of the decision it has taken, and affects the state of the world.
Another intuition is that we want to avoid unnecessarily restrictive assumptions in constructing a theory of consciousness. Our conscious visual experience of nearby space, for instance, is approximately Euclidean. But it would be an unnecessary restriction to require that all of our perceptual experiences be represented by Euclidean spaces.
However it does seem necessary to discuss the probability of having a conscious experience, of making a particular decision, and of making a particular change in the world through action. Thus, it seems necessary to assume that we can represent the world, our conscious experiences, and our possible actions with probability spaces.
We also want to avoid unnecessarily restrictive assumptions about the processes of perception, decision, and action. We might find, for instance, that a particular decision process maximizes expected utility, or minimizes expected risk, or builds an explicit model of the self. But it would be an unnecessary restriction to require this of all decisions.
However, when considering the processes of perception, decision and action, it does seem necessary to discuss conditional probability. It seems necessary, for instance, to discuss the conditional probability of deciding to take a specific action given a specific conscious experience, the conditional probability of a particular change in the world given that a specific action is taken, and the conditional probability of a specific conscious experience given a specific state of the world.
A general way to model such conditional probabilities is by the mathematical formalism of Markovian kernels (Revuz, 1984). One can think of a Markovian kernel as simply an indexed list of probability measures. In the case of perception, for instance, a Markovian kernel might specify that if the state of the world is w1, then here is a list of the probabilities for the various conscious experiences that might result, but if the state of the world is w2, then here is a different list of the probabilities for the various conscious experiences that might result, and so on for all the possible states of the world. A Markovian kernel on a finite set of states can be written as a matrix in which the entries in each row sum to 1.
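For concreteness, a hypothetical perceptual kernel on finite sets might look like the following; the numbers are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# A Markovian kernel on finite sets, written as a row-stochastic matrix:
# row i lists the probabilities of each outcome given input state i.
P = np.array([
    [0.9, 0.1, 0.0],   # world state w1: probabilities of experiences x1, x2, x3
    [0.2, 0.5, 0.3],   # world state w2
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability measure

def sample(kernel, state):
    """Draw one output given an input state -- one use of the channel."""
    return rng.choice(kernel.shape[1], p=kernel[state])

print(sample(P, 0))  # an experience index sampled from row w1
```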
A Markovian kernel can also be thought of as an information channel. Cover and Thomas, for instance, define “a discrete channel to be a system consisting of an input alphabet X and output alphabet Y and a probability transition matrix p(y|x) that expresses the probability of observing the output symbol y given that we send the symbol x” (Cover and Thomas, 2006). Thus, a discrete channel is simply a Markovian kernel.
So, each time a conscious agent interacts with the world and, in consequence, has a conscious experience, we can think of this interaction as a message being passed from the world to the conscious agent over a channel. Similarly, each time the conscious agent has a conscious experience and, in consequence, decides on an action to take, we can think of this decision as a message being passed over a channel within the conscious agent itself. And when the conscious agent then takes the action and, in consequence, alters the state of the world, we can think of this as a message being passed from the conscious agent to the world over a channel. In the discrete case, we can keep track of the number of times each channel is used. That is, we can count the number of messages that are passed over each channel. Assuming that the three channels (perception, decision, action) work in lock step, we can use one counter, N, to keep track of the number of messages that are passed.
These are some of the intuitions that underlie the definition of conscious agent that we will present. These intuitions can be represented pictorially in a diagram, as shown in Figure 1. The channel P transmits messages from the world W, leading to conscious experiences X. The channel D transmits messages from X, leading to actions G. The channel A transmits messages from G that are received as new states of W. The counter N is an integer that keeps track of the number of messages that are passed on each channel.
Figure 1. A diagram of a conscious agent. A conscious agent has six components as illustrated here. The maps P, D, and A can be thought of as communication channels. [Image: http://www.frontiersin.org/files/Articles/82279/fpsyg-05-00577-r2/image_m/fpsyg-05-00577-g001.jpg]
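Putting these pieces together, the dynamics of Figure 1 can be sketched in code. This is our illustrative construction under the stated intuitions, not the authors' formal definition: three arbitrary Markovian kernels P, D, and A over small finite sets, run in lock step while the counter N advances.

```python
import numpy as np

rng = np.random.default_rng(2)

def step(kernel, state):
    # One message over a channel: sample an output given the input state.
    return rng.choice(kernel.shape[1], p=kernel[state])

# Arbitrary illustrative kernels over small finite sets:
# 2 world states W, 3 experiences X, 2 actions G.
P = np.array([[0.8, 0.1, 0.1],        # perception: W -> X
              [0.1, 0.2, 0.7]])
D = np.array([[0.9, 0.1],             # decision:   X -> G
              [0.5, 0.5],
              [0.1, 0.9]])
A = np.array([[0.7, 0.3],             # action:     G -> W
              [0.2, 0.8]])

w, N = 0, 0
for _ in range(5):
    x = step(P, w)     # perceive: world state -> conscious experience
    g = step(D, x)     # decide:   experience -> action
    w = step(A, g)     # act:      action -> new world state
    N += 1             # the counter advances once per lock-step cycle
print(N, w)
```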
In what follows we will be using the notion of a measurable space. Recall that a measurable space, (X, 𝒳), is a set X together with a collection 𝒳 of subsets of X, called events, that satisfies three properties: (1) X is in 𝒳; (2) 𝒳 is closed under complement (i.e., if a set A is in 𝒳 then the complement of A is also in 𝒳); and (3) 𝒳 is closed under countable union. The collection 𝒳 of events is a σ-algebra (Athreya and Lahiri, 2006). A probability measure assigns a probability to each event in 𝒳.
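A minimal finite example: for X = {a, b}, the power set is a σ-algebra, and a probability measure assigns a number to each of its events:

```latex
\mathcal{X} = \bigl\{\emptyset,\ \{a\},\ \{b\},\ X\bigr\}, \qquad
\mu(\emptyset) = 0, \quad \mu(\{a\}) = \tfrac{1}{3}, \quad \mu(\{b\}) = \tfrac{2}{3}, \quad \mu(X) = 1.
```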
With these intuitions, we now present the formal definition of a conscious agent where, for the moment, we simply assume that the world is a measurable space (W, 𝒲).
* * * * *

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

For helpful discussions and comments on previous drafts we thank Marcus Appleby, Wolfgang Baer, Deepak Chopra, Federico Faggin, Pete Foley, Stuart Hameroff, David Hoffman, Menas Kafatos, Joachim Keppler, Brian Marion, Justin Mark, Jeanric Meller, Julia Mossbridge, Darren Peshek, Manish Singh, Kyle Stephens, and an anonymous reviewer.

References available at the Frontiers site
