Joshua Knobe - Assistant Professor, Program in Cognitive Science and Department of Philosophy at Yale University
Imagine two people discussing a question in mathematics. One of them says, “7,497 is a prime number,” while the other says, “7,497 is not a prime number.” In a case like this one, we would probably conclude that there is a single right answer and that anyone who says otherwise must be mistaken. The question under discussion here, we might say, is perfectly objective.
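(A small aside that is not part of the original essay: the mathematical dispute really does have a single right answer, and a quick trial-division check settles it. The sketch below is purely illustrative; since 7,497 = 3 × 2,499, it is not prime, so the second speaker is simply correct.)

```python
# Illustrative aside (not from the original essay): a simple trial-division
# primality check shows the question has one objectively right answer.
def is_prime(n: int) -> bool:
    """Return True if n is prime, checking divisors up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(7497))  # False: 7,497 = 3 * 2,499 = 3^2 * 7^2 * 17
```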
But now suppose we switch to a different topic. Two people are talking about food. One of them says “Don’t even think about eating caterpillars! They are totally disgusting and not tasty at all,” while the other says “Caterpillars are a special delicacy — one of the tastiest, most delectable foods a person can ever have occasion to eat.” In this second case, we might have a very different reaction. We might think that there isn’t any single right answer. Maybe caterpillars are just tasty for some people but not for others. This latter question, we might think, should be understood as relative.
Now that we’ve got at least a basic sense for these two categories, we can turn to a more controversial case. Suppose that the two people are talking about morality. One of them says “That action is deeply morally wrong,” while the other is speaking about the very same action and says “That action is completely fine — not the slightest thing to worry about.” In a case like this, one might wonder what reaction would be most appropriate. Should we say that there is a single right answer here, or should we say that different answers could be right for different people? In other words, should we say that morality is something objective or something relative?
This question lies at the center of a long and complex philosophical debate. The usual assumption is that ordinary people treat moral judgments as getting at something objective, but there is a lot of disagreement about how to make sense of this ordinary practice within a broader theory about the nature of morality. Is people’s ordinary practice fundamentally correct? Or is it founded on some sort of error? Or might there be some third possible view one could adopt here? The debate over these questions has been a wonderfully sophisticated one, filled with dazzling arguments, objections and replies.
There is just one snag. No real evidence is offered for the initial assumption that ordinary people treat moral claims as getting at something objective. Instead, the traditional approach is just to start out with the assumption that people look at morality in this way and then begin arguing from there.
With the growing interest in experimental philosophy and empirical moral psychology, there has been a surge of recent attempts to go after these questions in a more systematic way. Researchers have taken the conceptual insights developed in the existing philosophical literature and used them to generate controlled experimental studies. But a funny thing happened when these questions were taken into the lab. Again and again, the researchers did not end up confirming the traditional view. They did not find that people overwhelmingly favor objectivism. Instead, the results consistently pointed to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers and other people under other circumstances offering more relativist views.
For a nice example from recent research, consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters — John and Fred — who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). The result was a surprising one. It just wasn’t the case that participants overwhelmingly favored the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
Taking a slightly different approach, Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, and of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? The results revealed a systematic developmental difference: young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people being less inclined toward relativism than college students are.)
So there we have it. People are more inclined to be relativists when they are high in openness to experience, when they are especially good at considering multiple possibilities, and when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our hypothesis was that people are drawn to relativism to the extent that they open their minds to alternative perspectives. There might be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seem always to find the same basic effect: the more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies.
1 comment:
Pretty cool - makes me think of Robert Kegan.