Tuesday, October 12, 2010

Responses to the Consensus Statement Reached Among Participants at the Edge "The New Science of Morality" Conference

Last month, the Edge "The New Science of Morality" conference produced a consensus statement signed by several moral philosophers and scientists. The organizers then sought responses from other scientists, not all of whom agree completely with the statement.

CONSENSUS STATEMENT

1) Morality is a natural phenomenon and a cultural phenomenon
Like language, sexuality, or music, morality emerges from the interaction of multiple psychological building blocks within each person, and from the interactions of many people within a society. These building blocks are the products of evolution, with natural selection playing a critical role. They are assembled into coherent moralities as individuals mature within a cultural context. The scientific study of morality therefore requires the combined efforts of the natural sciences, the social sciences, and the humanities.

2) Many of the psychological building blocks of morality are innate
The word "innate," as we use it in the context of moral cognition, does not mean immutable, operational at birth, or visible in every known culture. It means "organized in advance of experience," although experience can revise that organization to produce variation within and across cultures.

Many of the building blocks of morality can be found, in some form, in other primates, including sympathy, friendship, hierarchical relationships, and coalition-building. Many of the building blocks of morality are visible in all human cultures, including sympathy, friendship, reciprocity, and the ability to represent others' beliefs and intentions.

Some of the building blocks of morality become operational quite early in childhood, such as the capacity to respond with empathy to human suffering, to act altruistically, and to punish those who harm others.

3) Moral judgments are often made intuitively, with little deliberation or conscious weighing of evidence and alternatives
Like judgments about the grammaticality of sentences, moral judgments are often experienced as occurring rapidly, effortlessly, and automatically. They occur even when a person cannot articulate reasons for them.

4) Conscious moral reasoning plays multiple roles in our moral lives
People often apply moral principles and engage in moral reasoning. For example, people use reasoning to detect moral inconsistencies in others and in themselves, or when moral intuitions conflict, or are absent. Moral reasoning often serves an argumentative function; it is often a preparation for social interaction and persuasion, rather than an open-minded search for the truth. In line with its persuasive function, moral reasoning can have important causal effects interpersonally. Reasons and arguments can establish new principles (e.g., racial equality, animal rights) and produce moral change in a society.

5) Moral judgments and values are often at odds with actual behavior
People often fail to live up to their consciously endorsed values. One of the many reasons for the disconnect is that moral action often depends on self-control, which is a fluctuating and limited resource. Doing what is morally right, especially when contrary to selfish desires, often depends on an effortful inner struggle with an uncertain outcome.

6) Many areas of the brain are recruited for moral cognition, yet there is no "moral center" in the brain
Moral judgments depend on the operation of multiple neural systems that are distinct but that interact with one another, sometimes in a competitive fashion. Many of these systems play comparable roles in non-moral contexts. For example, there are systems that support the implementation of cognitive control, the representation of mental states, and the affective representation of value in both moral and non-moral contexts.

7) Morality varies across individuals and cultures
People within each culture vary in their moral judgments and behaviors. Some of this variation is due to heritable differences in temperament (for example, agreeableness or conscientiousness) or in morally-relevant capacities (such as one’s ability to take the perspective of others). Some of this difference is due to variations in childhood experiences; some is due to the roles and contexts influencing a person at the moment of judgment or action.

Morality varies across cultures in many ways, including the overall moral domain (what kinds of things get regulated), as well as specific moral norms, practices, values, and institutions. Moral virtues and values are strongly influenced by local and historical circumstances, such as the nature of economic activity, form of government, frequency of warfare, and strength of institutions for dispute resolution.

8) Moral systems support human flourishing, to varying degrees
The emergence of morality allowed much larger groups of people to live together and reap the benefits of trust, trade, shared security, long term planning, and a variety of other non-zero-sum interactions. Some moral systems do this better than others, and therefore it is possible to make some comparative judgments.

The existence of moral diversity as an empirical fact does not support an "anything-goes" version of moral relativism in which all moral systems must be judged to be equally good. We note, however, that moral evaluations across cultures must be made cautiously because there are multiple justifiable visions of flourishing and wellbeing, even within Western societies. Furthermore, because of the power of moral intuitions to influence reasoning, social scientists studying morality are at risk of being biased by their own culturally shaped values and desires.

Signed by:

Roy Baumeister, Florida State University
Paul Bloom, Yale University
Joshua Greene, Harvard University
Jonathan Haidt, University of Virginia
Sam Harris, Project Reason
Joshua Knobe, Yale University
David Pizarro, Cornell University

____

To view videos and transcripts of the conference, please visit:
http://www.edge.org/3rd_culture/morality10/morality10_index.html

Here are a few of the responses, including one from Alison Gopnik, one of my favorite developmental psychologists; her work with infants and young children has totally changed my understanding of the wee ones.

RANDOLPH NESSE, M.D.
Psychiatrist, University of Michigan; coauthor, Why We Get Sick

What a great idea to ask people who study morality what they agree on! Most of the points in the consensus statement are nice, straightforward descriptions. Morality is intuitive, emotional, not located in any one part of the brain, both biological and cultural, and somewhat different in different individuals and cultures. These statements seem bland, but they are sophisticated. The overarching conclusion, that morality can and should be studied scientifically as a natural and social phenomenon, seems undeniably correct to me.

Many will, however, find it provocative. I'd like to hear what they have to say. They should not be dismissed as merely religious or insufficiently rational. It seems entirely possible that there are good reasons, perhaps even evolutionary reasons, for widespread reluctance to think about morality as a natural phenomenon. It seems possible that people feel intuitively that an objective view of morality could undermine moral behavior.

One other thought: missing from the summary, but no doubt present throughout the detailed transcripts, is the back story that led to progress in studying morality. The death this week of George Williams makes the advances of the 1960s particularly salient. It seems to me that the discovery that natural selection does not work for the good of the species is a psychic trauma as great as that imposed by Copernicus and Freud. It still reverberates, as evidenced by Nowak's article in Nature last week, and subsequent commentaries.

I have a special interest in this, because I think Mary Jane West-Eberhard's ideas about social selection explain how selection can shape genuine altruism with no need for group selection. Just as sexual selection shapes extreme traits that make one preferred as a mate, extreme altruistic traits can be shaped if they result in being preferred as a social partner or group member.

I find this important not only as a way of escaping the interminable debate about levels of selection, but also because it could explain some otherwise mysterious things I see in the clinic, especially our shared extreme concern about what others think about us. I think this will prove crucial to any explanation of how selection shaped capacities for morality.

* * * * *

ALISON GOPNIK
Professor of Psychology, University of California, Berkeley; Author, The Philosophical Baby

We can agree that there are some innate building blocks for moral intuition and judgment. But surely one of the most dramatic and obvious features of our moral capacities, evident everywhere from democracy to feminism to gay marriage, is our capacity for change and even radical transformation with new experiences. Practices that once seemed self-evidently acceptable, such as slavery or the oppression of women, seem so no longer.

At the same time this transformation isn't just random but seems to have a progressive quality. How do we get to new and better moral conceptions of the world, if the moral nativists are right? The consensus statement, at least implicitly, suggests that the solution to this problem lies in a contrast between moral intuition and moral reasoning.

The implication is that change and revision are only possible through explicit moral reasoning which operates on top of an innate evolved set of moral intuitions. But, arguably, even young children seem able to understand both that social practices can change and that that change should minimize harm. The situation is analogous to that of our ability to learn about the world.

There, too, we might have thought that only self-conscious, explicit scientific reasoning allowed for change and progress. In fact, the developmental evidence suggests that, in both the cognitive and moral domains, we are capable of changing, revising, and altering what we do and believe in the light of experience, and this ability is as deeply evolved a part of our human nature as any.

* * * * *

PETER DITTO
Professor of Psychology, University of California, Irvine

People have a history of confusing what is with what ought to be. Any list of general principles underlying moral judgment should include this observation.

I agree with virtually everything in this consensus statement. This is not surprising given the quality of the contributors. But any scientific treatment of moral judgment must be clear about its defining characteristics, boundaries, and the nature of its relations with other kinds of judgment that have interested psychologists and philosophers. And yet the distinction between descriptive and prescriptive judgment has always been more blurry, both conceptually and empirically, than it first appears.

The is-ought problem has a venerable history in moral philosophy. Most attention has been directed toward our tendency to assume that what is ought to be. The naturalistic fallacy (assuming that what is natural is also morally good) is a classic example. More recently, the psychologist Paul Rozin has described the process of moralization, in which some behavior originally viewed as bad or good for pragmatic reasons (smoking cigarettes or vegetarianism, say) takes on moral value over time. Specifying the psychological process by which descriptive beliefs and pragmatic pressures are converted into vice and virtue will be crucial to understanding both personal and cultural variation in moral evaluation.

But it is the opposite process — our tendency to come to believe that what ought to be really is — that may be the more interesting and important confusion. Moral intuitions often violate principles of utility in that some act is treated as sacred or protected in a way that conflicts with cost-benefit logic. If I refuse to betray one friend to save the lives of several others, my sense that disloyalty is morally wrong conflicts with well-rehearsed economic intuitions that an act of betrayal on my part would have produced a lower body count.

Such moral dilemmas fascinate psychologists and philosophers, and capture the psychological linchpin underlying real-world political controversies regarding issues like capital punishment, the use of enhanced interrogation techniques, and even embryonic stem cell research. But when was the last time you heard someone say that capital punishment is morally wrong while conceding that it is effective at deterring future crime? Years of psychological research confirm that mental systems abhor such dissonance. Instead, people who believe that enhanced interrogation is morally wrong also typically believe that it is ineffective in producing actionable intelligence, and those morally opposed to embryonic stem cell research almost always doubt its likelihood of producing medical breakthroughs. Our moral intuitions tend to shape our descriptive beliefs such that we construct a world in which the right course of action morally becomes the right course of action practically as well.

The fluid relation between descriptive and prescriptive belief is both an age-old observation and a theoretical and empirical challenge for modern morality researchers. It also has important practical implications. The intractability of the venomous culture war that plagues contemporary American politics flows from this tendency to fuse moral intuitions with factual beliefs. It is difficult to resolve differences of moral opinion when each side has its own facts.
