NYU hosted a three-day conference, the 2012 Bioethics Conference: The Moral Brain, organized by the NYU Center for Bioethics in collaboration with the Duke Kenan Institute for Ethics, with generous support from the NYU Graduate School of Arts & Science and the NYU Humanities Initiative. The conference ran from Friday, March 30, 2012 through Sunday, April 1, 2012, and the videos began appearing online on May 1.
Here are some brief opening remarks followed by the first day's lectures; I will post more as they become available (Session II has not been posted yet).
Welcoming Remarks: The Significance of Neuroscience for Morality: Lessons from a Decade of Research
Thomas Carew, Dean of the Faculty of Arts & Science, New York University
It has been a decade since the first brain imaging studies of moral judgments by Joshua Greene, Jorge Moll and their colleagues were reported. During this time, there have been rich philosophical and scientific discussions regarding a) whether brain imaging data can tell us anything about moral judgments, and b) if so, what they do tell us. In this workshop, we aim to bring together leading philosophers, neuroscientists, and psychologists in this area to examine these issues and to explore the future directions of this research.
Session I: Beyond Point-And-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics
Session Chair: Joseph LeDoux, University Professor; Henry and Lucy Moses Professor of Science; Professor of Neural Science and Psychology, Center for Neural Science and Psychology, New York University
Speaker: Joshua Greene, John & Ruth Hazel Associate Professor of Social Sciences, Department of Psychology, Harvard University
Abstract: Does the "is" of empirical moral psychology have implications for the "ought" of normative ethics? I'll argue that it does. One cannot deduce moral truths from scientific truths, but cognitive science, including cognitive neuroscience, may nevertheless influence moral thinking in profound ways. First, I'll review evidence for the dual-process theory of moral judgment, according to which characteristically deontological judgments tend to be driven by automatic emotional responses while characteristically consequentialist judgments tend to be driven by controlled cognitive processes. I'll then consider the respective functions of automatic and controlled processes. Automatic processes are like the point-and-shoot settings on a camera, efficient but inflexible. Controlled processes are like a camera's manual mode, inefficient but flexible. Putting these theses together, I'll argue that deontological philosophy is essentially a rationalization of automatic responses that are too inflexible to handle our peculiarly modern moral problems. I'll recommend consequentialist thinking as a better alternative for modern moral problem-solving.
Session III: When the Mind Matters for Morality
Session Chair: William Ruddick, Professor of Philosophy, New York University; Former Director of the Center for Bioethics
Speaker: Liane Young, Assistant Professor of Psychology, Boston College
Abstract: Mental state reasoning is critical for moral cognition, allowing us to distinguish, for example, murder from manslaughter. I will present neural evidence for distinct cognitive components of mental state reasoning for moral judgment, and investigate differences in mental state reasoning for distinct moral domains, i.e. harm versus purity, for self versus other, and for groups versus individuals. I will discuss these findings in the context of the broader question of why the mind matters for morality.
Session IV: The Representation of Reinforcement Values in Care-Based Morality & the Implications of Dysfunction for the Development of Psychopathic Traits
Session Chair: Lila Davachi, Associate Professor of Psychology, New York University
Speaker: James Blair, Chief of the Unit on Affective Cognitive Neuroscience, National Institute of Mental Health, NIH
Abstract: This talk will concentrate on two brain areas critical for the development of care-based morality (social rules covering harm to others). The role of the amygdala in stimulus-reinforcement learning will be considered, particularly when the reinforcement is social (the fear, sadness and pain of others). The role of orbital frontal cortex in the representation of value, critical for decision making (both care-based moral and non-moral), will also be considered. Data showing dysfunction in both of these systems, and in their interaction, in individuals with psychopathic traits will be presented, and the implications of these data for care-based (and other forms of) morality will be considered.
Session V: Is There One Moral Brain?
Session Chair: Don Garrett, Chair of Department and Professor of Philosophy, New York University
Speaker: Walter Sinnott-Armstrong, Chauncey Stillman Professor in Practical Ethics, Department of Philosophy & Kenan Institute for Ethics, Duke University
Abstract: Different kinds of moral dilemmas produce activity in different parts of the brain, so there is no single neural network behind all moral judgments. This paper will survey the growing evidence for this claim and its implications for philosophy and for method in moral neuroscience.