The review notably does not include Sam Harris's Free Will, a book that, despite its popularity, is not in the same league as the four under review; Michael Gazzaniga's Who's in Charge? is a much better book on the topic of free will.
Iain DeWitt is a doctoral candidate in the department of neuroscience at Georgetown University.
Iain DeWitt
From the March/April 2013 issue
Braintrust: What Neuroscience Tells Us about Morality
by Patricia S. Churchland
Princeton University Press, 2011, 288 pp., $24.95
The Righteous Mind: Why Good People Are Divided by Politics and Religion
by Jonathan Haidt
Pantheon Books, 2012, 419 pp., $28.95
Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind
by Robert Kurzban
Princeton University Press, 2011, 288 pp., $27.95
Who’s in Charge? Free Will and the Science of the Brain
by Michael S. Gazzaniga
Ecco, 2011, 272 pp., $27.99
In 1690, in An Essay Concerning Human Understanding, John Locke entreated us to consider the mind’s curious “[annexation of] the idea of pain to the motion of a piece of steel dividing our flesh.” Obviously enough, the sensation of pain resembles nothing in steel or in its motion. Pain is a sensation of the body, a signal that the body’s physical integrity has been breached. Locke used this quip to illustrate that perception is not the direct experience of material reality. Rather, external events induce sensations that are a limited interpretation of reality. In 1884, Scientific American asked and answered the famous question, “if a tree were to fall on an uninhabited island, would there be any sound?” In short, no:
Sound is vibration, transmitted to our senses through the mechanism of the ear, and recognized as sound only at our nerve centers. The falling of the tree or any other disturbance will produce vibration of the air. If there be no ears to hear, there will be no sound.1
Absent an observer, sound, per se, does not exist.
That sound is a construct of the organism is also clear from species’ hearing ranges. Dolphins, for instance, hear 150–150,000 Hz oscillations, whereas humans hear in the range of 20–20,000 Hz. We perceive only as much of reality as our mechanisms of transduction, our sensory organs, afford us. The remainder, the un-transduced portion, is lost to oblivion (or to instrumentation). Transduction induces both veridical representation and editorializing on the biological value of events and objects, such as fright at the apprehension of threat. Morality, perhaps counterintuitively, begins with editorialized sensation. To echo Locke, we curiously annex feelings of anger and disgust to the transgressive behavior of others.
In Braintrust, Patricia Churchland, a philosopher at the University of California, San Diego and co-founder of neurophilosophy—the modern incarnation of the philosophy of mind—examines the nature of morality and the biology of sociality. Morality, she offers, is this:
A four-dimensional scheme for social behavior that is shaped by interlocking brain processes: (1) caring (rooted in attachment to kin and kith and care for their well-being), (2) recognition of others’ psychological states (rooted in the benefits of predicting the behavior of others), (3) problem-solving in a social context (e.g., how we should distribute scarce goods, settle land disputes; how we should punish the miscreants), and (4) learning social practices (by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy).
While good, this definition is incomplete. In The Righteous Mind, a book on morality and the evolution of group living, Jonathan Haidt, a psychologist at NYU, expresses a similar view but adds that morality curbs short-term self-interest:
Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible.
Still, something is missing. In Why Everyone (Else) Is a Hypocrite, Robert Kurzban, a psychologist at the University of Pennsylvania, captures the missing element:
One of the most peculiar things about humans is just how much they care about what other humans are up to. In essentially all of the rest of the natural world, unless one organism’s fate is intimately tied to another organism’s decision . . . organisms typically ignore one another. . . . Organisms should be designed . . . to pay attention only to those things that are directly relevant. We’re different. We seem to care a lot about what other humans are up to. And when other people . . . say some particular magical words, or try to sell (or rent) a body part . . . not only do we care, but we insist that they be punished.
Judgment is morality’s essence, sociality its purpose. The integrated thesis is then that a judgmental temperament makes it easier for people to live in larger, denser groups, be they denizens of humanity’s first villages in the Neolithic Levant or of present-day Manila, with some 47,000 inhabitants per square kilometer.
Omissions aside, Churchland’s definition fits well with evolution and observation. For moral sentiments to have evolved, they must enhance survival and fecundity. Their adaptive value is manifest in the influence they exert on social conduct. Moralizing alters risk aversion (trust) and punishment behavior, and game theory tells us each can be adaptive: trust influences outcomes in Prisoner’s Dilemma scenarios, and punishment can reduce free riding in Tragedy-of-the-Commons scenarios. For moralizing to be adaptive, moral sentiments must interface with planning and reasoning circuits. For social reasoning, in turn, to be adaptive, it must possess some theory of mind, succinctly defined as beliefs about others’ beliefs. The centrality of this to morality is illustrated by the common law: criminal liability rests on both actus reus, a guilty act, and mens rea, a guilty mind. Finally, while moralizing may be innate and hence universal, morality is plastic and hence parochial. As any parent or anthropologist will attest, children are socialized to the mores of their particular culture.
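To make the game-theoretic point concrete, here is a minimal sketch, not drawn from any of the books, of a public-goods ("commons") game in which cooperators may fine free riders. The endowment, multiplier, fine and punishment cost are arbitrary illustrative numbers; the only point is that a punishment stage can make free-riding unprofitable.

```python
# Toy public-goods ("commons") game: each player either contributes to a
# shared pot or free-rides. Contributions are multiplied and split evenly.
# An optional punishment stage lets cooperators fine free riders at a cost
# to themselves. All payoff numbers are arbitrary placeholders.

def payoffs(contributors, free_riders, endowment=10, multiplier=1.6,
            punish=False, fine=4, punish_cost=1):
    n = contributors + free_riders
    pot = contributors * endowment * multiplier
    share = pot / n                       # everyone gets an equal share of the pot
    cooperator = share                    # cooperators gave up their endowment
    free_rider = endowment + share        # free riders kept theirs and still share
    if punish and contributors > 0:
        free_rider -= fine * contributors        # fined once per punisher
        cooperator -= punish_cost * free_riders  # punishing is costly too
    return cooperator, free_rider

print(payoffs(3, 1))               # (12.0, 22.0): free-riding pays best
print(payoffs(3, 1, punish=True))  # (11.0, 10.0): with punishment, it no longer does
```

With these particular numbers the free rider earns the highest payoff when punishment is unavailable and the lowest when it is, which is the sense in which punishing behavior can be adaptive for a group.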
Churchland addresses morality from several perspectives, but her main interest is the neurobiological foundation of morality. This centers on oxytocin, a hormone traditionally appreciated for its role in the female reproductive cycle, especially in milk letdown. Beginning in the 1970s, interest grew in this hormone as a regulator of maternal and mate bonding—a love hormone, as it were. More recently, research has established oxytocin’s role in bonding and linked it concretely to trusting behavior.
Initial observations that the molecule was important in bonding came from lambs and voles. Injection of oxytocin into the brain of a sexually naive ewe elicits bonding with lambs and other maternal behaviors. Aware of this, researchers began to investigate the role of oxytocin and vasopressin, a sister molecule, in mate bonding in two species of vole, prairie and montane. Otherwise similar, these species show markedly different mating behavior. Following initial mating, prairie voles bond for life; montane voles don’t. This extends to differences in shared parenting and overall sociability: prairie voles share parenting and socially aggregate; montane voles don’t. When oxytocin and vasopressin receptors are blocked experimentally, prairie vole social behavior becomes like that of montane voles. This implies that oxytocin and vasopressin directly regulate mate bonding and social aggregation in voles.
In humans, behavioral economics has implicated oxytocin in trust. In a trust game, people administered oxytocin invested more of their wealth with trustees. The game was constructed so that the full value of an investment was at risk to the investor, but not because of market unpredictability: the trustee received a guaranteed return on the transfer and could then return any amount, or nothing, to the investor. Investors who received oxytocin instead of a placebo invested more on average and on more of the turns. Their decreased risk aversion, attributable to oxytocin, was interpreted as increased trust. Interestingly, participants administered oxytocin are also more likely to be forgiving; that is, they are less likely to adopt risk-averse behavior in response to a breach of trust. Oxytocin might therefore be understood as a hormone that reduces moral vigilance and, thereby, promotes tolerance.
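The structure of such a trust game can be summarized with a toy payoff function. The sketch below assumes a ten-unit endowment and a threefold multiplier, which are illustrative parameters rather than the study's actual design; it only shows why transferring more reads as trust, since the investor's outcome depends entirely on the trustee's goodwill.

```python
# Sketch of a two-player trust (investment) game of the kind described above.
# Endowment and multiplier are assumed, illustrative values. The investor's
# transfer is multiplied before reaching the trustee, so the trustee enjoys a
# guaranteed gain; what comes back to the investor is entirely the trustee's choice.

def trust_game(endowment, invested, returned_fraction, multiplier=3):
    assert 0 <= invested <= endowment
    received = invested * multiplier              # trustee's guaranteed gain
    returned = received * returned_fraction       # trustee may return any share
    investor_payoff = endowment - invested + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# A risk-averse investor who transfers little caps both players' gains;
# a trusting investor does better only if the trustee reciprocates.
print(trust_game(10, invested=2, returned_fraction=0.5))   # (11.0, 3.0)
print(trust_game(10, invested=10, returned_fraction=0.5))  # (15.0, 15.0)
print(trust_game(10, invested=10, returned_fraction=0.0))  # (0.0, 30.0)
```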
Churchland describes her project as examining the platform upon which morality is constructed. Her thesis is that the platform is maternal attachment to young. The largest single factor in human brain evolution is our exaggerated juvenile phase, during much of which we are helpless. This surely exerted strong selective pressure for parental behavior, care for kin. Churchland argues this is the forerunner of care for kith and strangers. Haidt, drawing from cross-cultural psychology, argues that the normative bedrock is not monolithic. He proposes six innate dimensions about which we are predisposed toward moralizing: harm-care, fairness-cheating, liberty-oppression, loyalty-betrayal, authority-subversion and sanctity-degradation. Churchland dissents, arguing that it is injudicious to expand the fold of primary offenses absent biological evidence—a position that overlooks results from infants and monkeys, which suggest the ethics of fairness and authority may have their own independent origins.
Innateness aside, we clearly judge not only the cruel, but also the unfair, tyrannical, disloyal, disrespectful and “impure.” Further, Haidt’s dimensions capture interesting variance. Genetic factors (like those affecting openness to experience), cultural factors (like sanitary customs) and personal experience appear to interact, setting one’s personal loading on each dimension. Political persuasion, intriguingly, is neatly reflected in these loadings: liberals weight the first three dimensions more heavily than the last three, while conservatives weight all six rather uniformly. Politics, then, can be aptly described as the teaming of like-minded moralizers in contests for power.
While the propensity to care and the like may be the cornerstone of the moral edifice, absent the propensity to judge, care is a behavior undifferentiated from other activities, like foraging. Haidt presents evidence for the centrality of emotion, not reason, in judgment. His research uses carefully constructed vignettes to probe ethical thinking. His experiments have participants make real-time judgments about scenarios constructed to offend sentiment, like eating a pet, but for which the factual account insulates the protagonists from ordinary lines of condemnation. Attempting to explain their moral judgments, participants find themselves talking in circles, offering rationales that ultimately prove incoherent. From this, Haidt concludes that affective reactions to social situations are prior, in time and causality, to cognitive assessments. That is, extemporaneous verbal explanations of moral values are more akin to post hoc self-interpretation than they are to ethical analysis. Intuition can be misleading.
Michael Gazzaniga, a neuroscientist at the University of California, Santa Barbara, and co-coiner of the term “cognitive neuroscience,” concludes likewise in Who’s In Charge?, observing that people who cite utilitarian motives for punishment, like deterrence, actually tend to act in accordance with retributivist principles. (This echoes Haidt but also implies retribution may reflect our evolved, if not enlightened, judicial predisposition.) Though extemporaneous explanations of moral belief may be largely rationalization, this need not imply that ethics is mere veneer over emotion, nor that moral instruction is futile. Rather, emotional reflexes are labile: social and self-derived feedback, reasoned or otherwise, can modify them.
Haidt links the rationalization of moral sentiment chiefly to external justification, an idea equally developed by Robert Kurzban. Kurzban’s overarching goal is to press readers to consider the modular organization of the mind and its implications for the internal consistency of belief, a phenomenon with obvious implications for morality. To explain self-deception (and many aspects of psychology) it is helpful both to consider the mind in light of evolution and to view it as an assemblage of processing modules.2
It is easiest to appreciate the role of evolution in shaping mentality if we consider animal behavior. For instance, upon taking over a pride, male lions kill all cubs who are not their own. As a result, the lionesses return to a state of sexual receptivity and new copulations occur. Animal behaviorists argue these behaviors are the product of natural selection. If males do not kill the cubs, they have fewer offspring due to female non-receptivity. If the lionesses don’t mate, they too have fewer offspring. Hence, the beastly practice.
Evolutionary psychology has its limits. It can only be applied rigorously to human behaviors that are common enough to be regarded as phenotypic of the species and that can be clearly shown to affect survival and fecundity. This domain, however, is still large. Moralizing behavior, for instance, is a good candidate to have been under selection. Putting empathy and emotion aside for a moment, evolutionary analysis may explain moralizing about infidelity. The essential question is, when is it reproductively advantageous for one party to restrict the sexual behavior of another? For females with high-quality mates, monogamy is clearly advantageous, as the male’s resources are exclusively devoted to the female’s offspring. Low-quality males also benefit from monogamy as, under polygamy, they are not regularly reproducing. From the perspective of any male, the less other males copulate, the better. For high-quality males, provided there remains some opportunity for unpunished, extra-pair copulation, the general proscription of polygamy is not especially costly. Low-quality females, however, lose something. They no longer have an option between sharing in the resources of a high-quality male and having the full resources of a lower-quality male. On the whole, a reasonable case can be made for a selective pressure promoting a species-typical behavior that enforces monogamy within one’s community, a moralization of infidelity.
A difficulty with this line of argument, however, is the math. In the lion example, the math is trivial. In most circumstances where one might want to apply evolutionary psychology with confidence, the calculations are not. Determining the true direction of selective pressure, such as whether a moralizing regime for infidelity would be evolutionarily stable in a given population, involves estimating the payoffs associated with adopting the trait. For this, qualitative statements like “high value” or “some opportunity” must be unpacked, quantified and mathematically related. Because this is nontrivial, arbitrating competing claims in evolutionary psychology is challenging. Nonetheless, provided the difficulty of proving claims is appreciated, musing about them can still be fruitful.
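For a sense of what such math involves, here is a minimal replicator-dynamics sketch for two competing strategies, say a "moralizer" who enforces monogamy and a rival who does not. The payoff matrix entries are invented placeholders; in a real analysis, justifying numbers like these is precisely the hard, nontrivial part.

```python
# What "doing the math" entails: given a payoff matrix for two strategies,
# replicator dynamics track whether a trait (here, a hypothetical "moralizing"
# strategy) spreads or dies out. The payoff entries are invented placeholders.

def replicator(payoff, p, generations=200):
    """payoff[i][j]: payoff to strategy i when facing strategy j.
    p: initial frequency of strategy 0 (the moralizer)."""
    for _ in range(generations):
        freqs = [p, 1 - p]
        fitness = [sum(payoff[i][j] * freqs[j] for j in range(2)) for i in range(2)]
        mean_fitness = sum(freqs[i] * fitness[i] for i in range(2))
        p = p * fitness[0] / mean_fitness   # strategies grow in proportion to fitness
    return p

payoff = [[3.0, 1.0],   # moralizer vs moralizer, moralizer vs non-moralizer
          [2.0, 2.5]]   # non-moralizer vs moralizer, non-moralizer vs non-moralizer
print(round(replicator(payoff, p=0.7), 3))  # ~1.0: moralizers spread
print(round(replicator(payoff, p=0.2), 3))  # ~0.0: moralizers die out
```

With these invented payoffs the outcome is bistable: the moralizing strategy takes over only if it is already common enough, which is one reason verbal arguments alone cannot settle whether such a regime would be evolutionarily stable.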
An implication of the infidelity example above is that it might be understandable for people simultaneously to be hypocrites and to abhor hypocrites. Such a person could benefit from the restraint that the persecution of wickedness imposes on others while themselves behaving wantonly. For an individual, then, it could be advantageous to be inconsistent—to genuinely believe in the enforcement of proscriptions and yet be oblivious to, or forgiving of, one’s own indiscretions. This presents a bit of a conundrum: How can a mind be made to be useful and yet also be incoherent? The answer is stove-piping or, more technically, information encapsulation, which psychologists call modularity. The idea is that the brain contains multiple networks, modules, whose circuitry is compartmentalized. Each module serves a specialized purpose. What dominates the overall character of the assembly of modules is evolutionary fitness. That evolution has structured the brain’s sensory and motor systems into a modular organization is uncontroversial. The extent and character of modularity as it applies to social psychology and higher-order functionality is more debatable, but to some extent modularity exists.
If the mind is modular, an important question is why consciousness feels unitary. Gazzaniga offers a solution: an Interpreter module whose function is to weave narrative from observation. His proposal is based on experiments with split-brain epilepsy patients, whose left and right cerebral hemispheres have been surgically disconnected as part of their treatment.
In one experiment, Gazzaniga simultaneously presented two images to the patients but, through manipulation of the visual system, each hemisphere of the cerebral cortex saw only one of the images. Patients were asked to respond by pointing to a related picture, one for each image. On one trial, Gazzaniga showed a chicken claw to the left hemisphere and a snow scene to the right. The patient responded by pointing to a chicken with the hand controlled by the left hemisphere and a shovel with the hand controlled by the right. Asked to explain the selections, the patient reported choosing the chicken because it went with the claw. He chose the shovel not because it went with snow but because, “you have to clean out the chicken [shed] with a shovel.” Expressive language is lateralized to the left hemisphere, so it is not surprising that the expressive language areas were able to access sensory memory of the chicken claw. These areas, however, had no access to sensory memory in the right hemisphere, which saw the snow. Interestingly, rather than admitting uncertainty, the left hemisphere confabulated. In the response interval, it observed the selection of the shovel. Then it interpreted the other hemisphere’s behavior, inferring the shovel to have been chosen for its relatedness to chickens, not snow.
Gazzaniga argues that this kind of post hoc self-interpretation is actually a routine function of the left hemisphere and that we are largely oblivious to instances of such filling in. Thus, conscious experience may feel unitary because, even though the processing that actually determines feelings, actions and sensations is modular, consciousness is only aware of the net result and the Interpreter’s post hoc assessment of it.
Kurzban and Haidt extend this notion, arguing that the Interpreter is not merely responsible for proffering interpretations. They argue that it proffers strategically biased, even ignorant, interpretations, much like a public-relations department whose primary purpose is persuasion. We need to convince others of our value, our rectitude, and of others’ villainy. Self-deception aids in this mission. Evolution may have tailored our Interpreters to be prone to believe optimistic, upstanding appraisals of ourselves, such that we portray and disseminate the most advantageous defensible positions available.
Returning to hypocrisy, we care deeply about the actions of others and insist on punishing those who run afoul of our mores. When it comes to our own deeds, however, we often admit a greater degree of latitude. To account for this disparity, Kurzban divides morality, positing a conscience whose domain is the stewardship of one’s own behavior, and a judge whose domain is the enforcement of propriety in others. Each module generates urges and each competes with other modules—like sex drive, in the case of the conscience—for dominance of sentiment. Because modules operate competitively and because they contain blind spots, hypocrisy may come standard.
Who’s In Charge? reviews the neuroscience of free will—essential to accountability—and the implications of neuroscience for law. Gazzaniga’s approach to free will owes as much to physics as it does to neuroscience. The universe is made of particles. Quantum mechanics confines knowledge of these particles to the realm of probability. Nonetheless, when considering the behavior of collections of particles, classical and relativistic physics, which are deterministic, hold sway. Gazzaniga quotes Albert Einstein on his view of free will: “In human freedom, in the philosophical sense, I am definitely a disbeliever. Everyone acts not only under external compulsion but also in accordance with inner necessity.”
Reconciling determinism with common sense is Gazzaniga’s first problem. He solves it through recourse to emergence and caution about levels of description. Just as life is an emergent property of the interactions of certain kinds of molecules, free will, like consciousness, may be an emergent property of information processing. Though we cannot conceive of a molecule as itself being alive (nor should we), we readily conceive of systems of molecules as living. The relation between life and inanimacy should teach us caution: do not look in the parts of a system for properties that are phenomena of the system as a whole. Thus, he wisely warns, we may be misguided if we infer free will’s impossibility from the laws of classical mechanics.
The second problem Gazzaniga engages is what neuroscience can tell us about free will. Some data suggest that the conscious impression of making a decision is an illusion. Consider reactions to painful stimuli. As noted above, pain fibers become active when their nerve endings sense damage to the integrity of the organism. They relay this information to the spinal cord, which initiates a reflexive response, for example, withdrawing one’s hand from a hot pan. Simultaneously, information about the painful event is relayed from the spinal cord to the brain. Only in the brain is the sensation of pain consciously felt. Although the brain had no role in the decision to withdraw the hand, we subjectively perceive ourselves as having felt the pain and having decided to withdraw the hand in response to it. In fact, the reaction happened prior to the conscious feeling of pain. Here again, we see the hand of the Interpreter. It seems to assemble a best inference about what happened from what it presumes possible.
The misattribution of free will to reflexive behavior is perhaps not of great consequence. A more formidable challenge comes from a line of experiments suggesting that certain neural events precede conscious reports of decision-making by, depending on the type of experiment, hundreds to thousands of milliseconds. The most recent of these experiments monitored brain activity with functional MRI while participants made simple decisions about when to push a button. While participants deliberated, a stream of letters was shown on screen so that they could mark the moment of decision by reporting the letter visible at that instant. Scientists were later able to show that certain brain activity predicted decisions and that this activity occurred as much as ten seconds prior to conscious reports of decision-making! This suggests that conscious volition may be only apparent: non-conscious processing predetermines action, and the subjective impression of free will is just post hoc interpretation. Gazzaniga believes these experiments are not easy to interpret. The timing of the neural events that predict behavior need not correspond to the timing of the conscious impression of choice for the choice to have been volitional. Essentially, if the conscious mental timeline is something of a reconstruction, it is difficult to make inferences from discrepancies between brain timing and report timing.
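The logic of those decoding experiments can be illustrated with simulated data. The sketch below is not the study's method, which used pattern classifiers on fMRI data; it merely plants a faint choice-related signal several seconds before a nominal "reported decision" and shows that a simple threshold rule applied to the early window predicts the eventual choice above chance, which is the form of evidence at issue.

```python
# Simulated illustration only, not real fMRI analysis. A weak choice-related
# signal appears several seconds before the reported decision (at t = 10);
# decoding the early window predicts the eventual left/right press above chance.
import random
random.seed(0)

def simulate_trial(lead_seconds=8):
    choice = random.choice([-1, +1])               # eventual button press
    # activity sampled once per second; a faint bias toward the eventual choice
    # appears 'lead_seconds' before the reported decision
    activity = [random.gauss(0, 1) + (0.5 * choice if t >= 10 - lead_seconds else 0)
                for t in range(10)]
    return activity, choice

trials = [simulate_trial() for _ in range(2000)]

# "Decode" the choice from the mean of an early window (t = 2..5), i.e.
# activity recorded well before the reported decision.
correct = sum((1 if sum(a[2:6]) / 4 > 0 else -1) == c for a, c in trials)
print(f"early-window decoding accuracy: {correct / len(trials):.2f}")  # > 0.5
```

As Gazzaniga notes, such decoding shows only that early activity carries information about the eventual choice; it does not by itself establish when, or whether, a conscious decision occurred.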
With respect to neuroscience in the courtroom, Gazzaniga argues that the application of neuroscience is, with rare exception, highly premature. Functional MRI-based lie detectors are presently little different from polygraphs. Similarly, what we know about brain organization and normal activity is derived from group analyses. Even when a result is highly reproducible across samples, individual variability may be large. What to make of this is an open question. At the least, it makes brain scans inadequate as exculpatory evidence. Finally, with respect to free will and accountability, Gazzaniga concludes, “The issue isn’t whether or not we are ‘free.’ The issue is that there is no scientific reason not to hold people accountable and responsible.”
The neuroscience of morality is important and exciting, yet very much a nascent enterprise. These books appropriately reflect this, engaging thought in philosophy and the social sciences as often as thought in the biological sciences. The authors write in the lucid, unsentimental style of books like Steven Pinker’s How the Mind Works. Haidt and Churchland offer the freshest material. All are insightful and original. Through their prose, empiricism speaks volumes on humanity. Those inclined toward the sentimental in their appreciation of morality would do well to master the empirical.
1. Scientific American, April 5, 1884, p. 218.
2. For an account of the evolutionary origin of self-deception that does not rely on the logic of modularity, see Robert Trivers, The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life (Basic Books, 2012).