
Tuesday, May 27, 2014

Who Goes To Jail? Matt Taibbi on American Injustice


It's disturbing to hear all of this out loud; usually these thoughts about the injustice of the Amerikan legal system stay only in my head. Matt Taibbi's new book is The Divide: American Injustice in the Age of the Wealth Gap (2014). Here is the publisher's blurb for the book:
Over the last two decades, America has been falling deeper and deeper into a statistical mystery:
 
Poverty goes up. Crime goes down. The prison population doubles. Fraud by the rich wipes out 40 percent of the world’s wealth. The rich get massively richer. No one goes to jail.

In search of a solution, journalist Matt Taibbi discovered the Divide, the seam in American life where our two most troubling trends—growing wealth inequality and mass incarceration—come together, driven by a dramatic shift in American citizenship: Our basic rights are now determined by our wealth or poverty. The Divide is what allows massively destructive fraud by the hyperwealthy to go unpunished, while turning poverty itself into a crime—but it’s impossible to see until you look at these two alarming trends side by side.

In The Divide, Matt Taibbi takes readers on a galvanizing journey through both sides of our new system of justice—the fun-house-mirror worlds of the untouchably wealthy and the criminalized poor. He uncovers the startling looting that preceded the financial collapse; a wild conspiracy of billionaire hedge fund managers to destroy a company through dirty tricks; and the story of a whistleblower who gets in the way of the largest banks in America, only to find herself in the crosshairs. On the other side of the Divide, Taibbi takes us to the front lines of the immigrant dragnet; into the newly punitive welfare system which treats its beneficiaries as thieves; and deep inside the stop-and-frisk world, where standing in front of your own home has become an arrestable offense. As he narrates these incredible stories, he draws out and analyzes their common source: a perverse new standard of justice, based on a radical, disturbing new vision of civil rights.

Through astonishing—and enraging—accounts of the high-stakes capers of the wealthy and nightmare stories of regular people caught in the Divide’s punishing logic, Taibbi lays bare one of the greatest challenges we face in contemporary American life: surviving a system that devours the lives of the poor, turns a blind eye to the destructive crimes of the wealthy, and implicates us all.
This video is courtesy of Democracy Now!

Who Goes To Jail? Matt Taibbi on American Injustice

Award-winning journalist Matt Taibbi is out with an explosive new book that asks why the vast majority of white-collar criminals have avoided prison since the financial crisis began, while an unequal justice system imprisons the poor and people of color on a mass scale. In The Divide: American Injustice in the Age of the Wealth Gap, Taibbi explores how the Depression-level income gap between the wealthy and the poor is mirrored by a "justice" gap in who is targeted for prosecution and imprisonment. "It is much more grotesque to consider the non-enforcement of white-collar criminals when you do consider how incredibly aggressive law enforcement is with regard to everybody else," Taibbi says.
Watch: All Democracy Now! interviews with Matt Taibbi
http://www.democracynow.org/appearanc...

Democracy Now! is an independent global news hour that airs Monday through Friday on 1,200+ TV and radio stations. Watch our livestream 8-9am ET at http://www.democracynow.org.

Please consider supporting independent media by making a donation to Democracy Now! today: visit http://owl.li/ruJ5Q.


Saturday, February 22, 2014

Paul Bloom - The War on Reason

Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology and Cognitive Science at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has published more than a hundred scientific articles in journals such as Science and Nature, and his popular writing has appeared in the New York Times, The New Yorker, The Atlantic Monthly, Slate, Natural History, and many other publications. He has won numerous awards for his research and teaching.

Bloom is the author of Just Babies: The Origins of Good and Evil (2013), as well as How Pleasure Works: The New Science of Why We Like What We Like (2011), Descartes' Baby: How the Science of Child Development Explains What Makes Us Human (2005), and several other books. A selection of his popular and academic articles can be found here.

In this article for The Atlantic Monthly, Bloom argues against the neuro-reductionist nonsense coming from people like Sam Harris, David Eagleman, Jonathan Haidt (all named directly), and Patricia Churchland (not named).

As regular readers will know, I side with Bloom (and Michael Gazzaniga, and Evan Thompson, and many other non-reductionist neuroscientists).

The War on Reason

Scientists and philosophers argue that human beings are little more than puppets of their biochemistry. Here's why they're wrong.

Paul Bloom | Feb 19 2014


Illustration by Matt Dorfman

ARISTOTLE'S DEFINITION OF MAN as a rational animal has recently taken quite a beating.

Part of the attack comes from neuroscience. Pretty, multicolored fMRI maps make clear that our mental lives can be observed in the activity of our neurons, and we’ve made considerable progress in reading someone’s thoughts by looking at those maps. It’s clear, too, that damage to the brain can impair the most-intimate aspects of ourselves, such as the capacity to make moral judgments or to inhibit bad actions. To some scholars, the neural basis of mental life suggests that rational deliberation and free choice are illusions. Because our thoughts and actions are the products of our brains, and because what our brains do is determined by the physical state of the world and the laws of physics—perhaps with a dash of quantum randomness in the mix—there seems to be no room for choice. As the author and neuroscientist Sam Harris has put it, we are “biochemical puppets.”

This conception of what it is to be a person fits poorly with our sense of how we live our everyday lives. It certainly feels as though we make choices, as though we’re responsible for our actions. The idea that we’re entirely physical beings also clashes with the age-old idea that body and mind are distinct. Even young children believe themselves and others to be not just physical bodies, subject to physical laws, but also separate conscious entities, unfettered from the material world. Most religious thought has been based on this kind of dualist worldview, as showcased by John Updike in Rabbit at Rest, when Rabbit talks to his friend Charlie about Charlie’s recent surgery:
“Pig valves.” Rabbit tries to hide his revulsion. “Was it terrible? They split your chest open and ran your blood through a machine?”

“Piece of cake. You’re knocked out cold. What’s wrong with running your blood through a machine? What else you think you are, champ?”

A God-made one-of-a-kind with an immortal soul breathed in. A vehicle of grace. A battlefield of good and evil. An apprentice angel …

“You’re just a soft machine,” Charlie maintains.
I bristle at that just, but the evidence is overwhelming that Charlie is right. We are soft machines—amazing machines, but machines nonetheless. Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.

Another attack on rationality comes from social psychology. Hundreds of studies now show that factors we’re unaware of influence how we think and act. College students who fill out a questionnaire about their political opinions when standing next to a dispenser of hand sanitizer become, at least for a moment, more politically conservative than those standing next to an empty wall. Shoppers walking past a bakery are more likely than other shoppers to make change for a stranger. Subjects favor job applicants whose résumés are presented to them on heavy clipboards. Supposedly egalitarian white people who are under time pressure are more likely to misidentify a tool as a gun after being shown a photo of a black male face.

 
Illustration by Stephen Doyle

In a contemporary, and often unacknowledged, rebooting of Freud, many psychologists have concluded from such findings that unconscious associations and attitudes hold powerful sway over our lives—and that conscious choice is largely superfluous. “It is not clear,” the Baylor College neuroscientist David Eagleman writes, “how much the conscious you—as opposed to the genetic and neural you—gets to do any deciding at all.” The New York University psychologist Jonathan Haidt suggests we should reject the notion that we are in control of our decisions and instead think of the conscious self as a lawyer who, when called upon to defend the actions of a client, mainly provides after-the-fact justifications for decisions that have already been made.

Such statements have produced a powerful backlash. What they represent, many people feel, are efforts at a hostile takeover of the soul: an assault on religious belief, on traditional morality, and on common sense. Derisory terms like neurotrash, brain porn, and (for the British) neurobollocks are often thrown around. Some people, such as the novelist Marilynne Robinson and the writer and critic Leon Wieseltier, argue that science has inappropriately ventured outside its scope and has still failed to capture the rich and transcendent nature of human experience. The author and clinical neuroscientist Raymond Tallis worries that such theories suggest no meaningful gap separates man and beast, a position that he argues, in Aping Mankind, is “not merely intellectually derelict but dangerous.”

For the most part, I’m on the side of the neuroscientists and social psychologists—no surprise, given that I’m a psychologist myself. Work in fields such as computational cognitive science, behavioral genetics, and social neuroscience has yielded great insights about human nature. I do worry, though, that many of my colleagues have radically overstated the implications of their findings. The genetic you and the neural you aren’t alternatives to the conscious you. They are its foundations.

KNOWING THAT WE ARE physical beings doesn’t tell us much. The interesting question is what sort of physical beings we are.

Nobody can deny that we are sometimes biochemical puppets. In 2000, an otherwise normal Virginia man started to collect child pornography and make sexual advances toward his prepubescent stepdaughter. He was sentenced to spend time in a rehabilitation center, only to be expelled for making lewd advances toward staff members and patients. The next step was prison, but the night before he was to be incarcerated, severe headaches sent him to the hospital, where doctors discovered a large tumor on his brain. After they removed it, his sexual obsessions disappeared. Months later, his interest in child pornography returned, and a scan showed that the tumor had come back. Once again it was removed, and once again his obsessions disappeared.

Other examples of biochemical puppetry abound. A pill used to treat Parkinson’s disease can lead to pathological gambling; date-rape drugs can induce a robot-like compliance; sleeping pills can lead to sleep-binging and sleep-driving. These cases—some of which are discussed in detail by David Eagleman in Incognito: The Secret Lives of the Brain (excerpted in the July/August 2011 Atlantic)—intrigue and trouble us because they involve significant actions that are disengaged from the normal mechanisms of conscious deliberation. When the victims are brought back to normal—the drug wears off; the tumor is removed—they feel sincerely that their desires and actions under the influence were alien to them, and fell outside the scope of their will.

For Eagleman, these examples highlight the need for a legal framework and criminal-justice system that can take into account our growing understanding of brain science. What we need, he argues, is “a shift from blame to biology.” This is reasonable enough. It’s hardly neurobollocks to think we should take the existence of a tumor into account when determining criminal responsibility for a sex offense.

But some cases raise thorny questions. Philosophers—and judges and juries—might disagree, for instance, as to whether an adult’s having been horrifically abused as a child can be considered as exculpatory as having a tumor. If the abuse visibly changed a person’s brain and stripped it of its full capacity for deliberation, should that count as a mitigating condition in court? What about individuals, such as certain psychopaths, who appear incapable of empathy and compassion? Should that diminish their responsibility for cruel actions?

Other cases are easier. It’s not hard to see the psychological distinction between the cold-blooded planning of a Mafia hit man and the bizarre actions of a paranoid schizophrenic. As you read this article, your actions are determined by physical law, but unless you have been drugged, or have a gun to your head, or are acting under the influence of a behavior-changing brain tumor, reading it is what you have chosen to do. You have reasons for that choice, and you can decide to stop reading if you want. If you should be doing something else right now—picking up a child at school, say, or standing watch at a security post—your decision to continue reading is something you are morally responsible for.

Some determinists would balk at this. The idea of “choosing” to stop (or choosing anything at all), they suggest, implies a mystical capacity to transcend the physical world. Many people think about choice in terms of this mystical capacity, and I agree with the determinists that they’re wrong. But instead of giving up on the notion of choice, we can clarify it. The deterministic nature of the universe is fully compatible with the existence of conscious deliberation and rational thought—with neural systems that analyze different options, construct logical chains of argument, reason through examples and analogies, and respond to the anticipated consequences of actions, including moral consequences. These processes are at the core of what it means to say that people make choices, and in this regard, the notion that we are responsible for our fates remains intact.

BUT THIS IS WHERE philosophy ends and psychology begins. It might be possible that we are physical beings who can use reason and make choices. But haven’t the psychologists shown us that this is wrong, that reason is an illusion? The sorts of findings I began this article with—about the surprising relationship between bakery smells and altruism, or between the weight of a résumé and how a job candidate is judged—are often taken to show that our everyday thoughts and actions are not subject to conscious control.

This body of research has generated a lot of controversy, and for good reason: some of the findings are fragile, have been enhanced by repeated testing and opportunistic statistical analyses, and are not easily replicated. But some studies have demonstrated robust and statistically significant relationships. Statistically significant, however, doesn’t mean actually significant. Just because something has an effect in a controlled situation doesn’t mean that it’s important in real life. Your impression of a résumé might be subtly affected by its being presented to you on a heavy clipboard, and this tells us something about how we draw inferences from physical experience when making social evaluations. Very interesting stuff. But this doesn’t imply that your real-world judgments of job candidates have much to do with what you’re holding when you make those judgments. What will probably matter much more are such boringly relevant considerations as the candidate’s experience and qualifications.

 
Illustration by Topos Graphics

Sometimes small influences can be important, and sometimes studies really are worth their press releases. It’s relevant that people whose polling places are schools are more likely to vote for sales taxes that will fund education. Or that judges become more likely to deny parole the longer they go without a break. Or that people serve themselves more food when using a large plate. Such effects, even when they’re small, can make a practical difference, especially when they influence votes and justice and health. But their existence doesn’t undermine the idea of a rational and deliberative self. To think otherwise would be like concluding that because salt adds flavor to food, nothing else does.

The same goes for stereotyping. Hundreds of studies have found that individuals, including those who explicitly identify themselves as egalitarian, make assumptions about people based on whether they are men or women, black or white, Asian or Jewish. Such assumptions have real-world consequences. They help determine how employers judge job applications; they motivate young children to interact with some individuals and not others; they influence police officers as they decide whether or not to shoot somebody. These are important findings. But as the Rutgers psychologist Lee Jussim points out in his recent book, Social Perception and Social Reality, these studies don’t mean what many people think they do.

For one thing, we apply stereotypes in a limited way, mainly when judging strangers. When we know someone, we’re far more influenced by facts about that individual than about the categories he or she belongs to. To a striking degree, too, we know what our stereotypes are. Ask people about their stereotypes of gay men, the elderly, or lawyers, say, and what they’ll tell you is likely to align pretty well with what social psychologists have found in their studies of unconscious bias. Furthermore, many stereotypes are accurate. To take one of the most obvious examples: men really are more prone to violence and sexual assault than women are. If you need to quickly judge the threat posed by a stranger standing at the corner of the street you’re about to walk down at night, you’ll probably fall back on this stereotype, consciously and unconsciously. And you’ll be right to do so.

None of this is to defend stereotyping. Strong moral arguments exist for why we should often try to ignore stereotypes or override them. But we shouldn’t assume they represent some irrational quirk of the unconscious mind. In fact, they’re largely the consequence of the mind’s attempt to make a rational decision.

A more general problem with the conclusions that people draw from the social-psychological research has to do with which studies get done, which papers get published, and which findings get known. Everybody loves nonintuitive findings, so researchers are motivated to explore the strange and nonrational ways in which the mind works. It’s striking to discover that when assigning punishment to criminals, people are influenced by factors they consciously believe to be irrelevant, such as how attractive the criminals are and the color of their skin. This finding will get published in the top journals, and might make its way into the Science section of The New York Times. But nobody will care if you discover that people’s feelings about punishments are influenced by the severity of the crimes or the criminals’ past record. This is just common sense.

Whether this bias in what people find interesting is reasonable is a topic for another day. What’s important to remember is that some scholars and journalists fall into the trap of thinking that what they see in journals provides a representative picture of how we think and act.

OUR CAPACITY for rational thought emerges in the most-fundamental aspects of life. When you’re thirsty, you don’t just squirm in your seat at the mercy of unconscious impulses and environmental inputs. You make a plan and execute it. You get up, find a glass, walk to the sink, turn on the tap. These aren’t acts of genius, you haven’t discovered the Higgs boson, but still, this sort of mundane planning is beyond the capacity of any computer, which is why we don’t yet have robot servants. Making it through a single day requires the formulation and initiation of complex multistage plans, in a world that’s unforgiving of mistakes (try driving your car on an empty tank, or going to work without pants). The broader project of holding together relationships and managing a job or career requires extraordinary cognitive skills.

If you doubt the power of reason, consider the lives of those who have less of it. We take care of the intellectually disabled and brain-damaged because they cannot take care of themselves; we don’t let toddlers cook hot meals; and we don’t allow drunk people to drive cars or pilot planes. Like many other countries, the United States has age restrictions for driving, military service, voting, and drinking, and even higher age restrictions for becoming president, all under the assumption that certain core capacities, like wisdom and self-control, take time to mature.

Many commentators believe that we overemphasize reason’s importance. Social psychology, David Brooks writes in The Social Animal, “reminds us of the relative importance of emotion over pure reason, social connections over individual choice, character over IQ.” Malcolm Gladwell, for his part, argues in Outliers for the irrelevance of a high IQ. “If I had magical powers,” he says, “and offered to raise your IQ by 30 points, you’d say yes—right?” But then he goes on to say that you shouldn’t bother, because after you pass a certain basic threshold, IQ really doesn’t make any difference.

Brooks and Gladwell are both interested in the determinants of success. Brooks focuses on emotional and social skills, and Gladwell on the role of contingent factors, such as who your family is and where and when you were born. Both are right in assuming these factors to be significant, and Gladwell is probably correct that IQ, like other human traits, follows the law of diminishing returns. But both are wrong to doubt the central importance of intelligence. Indeed, intelligence, as measured by an IQ test, is correlated with all sorts of good things, such as steady job performance, staying out of prison, and being in a stable and fulfilling relationship. One might object that IQ is meaningful only because our society is obsessed with it. In the United States, after all, getting into a good university depends to a large extent on how well you do on the SAT, which is basically an IQ test. (The correlation between a person’s score on the SAT and on the standard IQ test is very high.) If we gave out slots at top universities to candidates with red hair, we would quickly live in a world in which being a redhead correlated with high income, elevated status, and other positive outcomes.

Still, the relationship between IQ and success is hardly arbitrary, and it’s no accident that universities take such tests so seriously. They reveal abilities such as mental speed and the capacity for abstract thought, and it’s not hard to see how these abilities aid intellectual pursuits. Indeed, high intelligence is not only related to success; it’s also related to kindness. Highly intelligent people commit fewer violent crimes (holding other things, such as income, constant) and are more cooperative, perhaps because intelligence allows one to appreciate the benefits of long-term coordination and to consider the perspectives of others.

Then there’s self-control. This can be seen as the purest embodiment of rationality, in that it reflects the working of a brain system (embedded in the frontal lobe, the part of the brain that lies behind the forehead) that restrains our impulsive, irrational, or emotive desires. In classic studies of self-control that he conducted in the 1960s, Walter Mischel investigated whether children could refrain from eating one marshmallow now to get two later. What he found was that the kids who waited for two marshmallows did better in school and on their SATs as adolescents, and ended up with better self-esteem, mental health, relationship quality, and income as adults. In his recent book, The Better Angels of Our Nature, Steven Pinker notes that a high level of self-control benefits not just individuals but also society. Europe, he writes, witnessed a thirtyfold drop in its homicide rate between the medieval and modern periods, and this, he argues, had much to do with the change from a culture of honor to a culture of dignity, which prizes restraint.

WHAT ABOUT THE capacity for moral judgment? In much of social psychology, morality is seen as the paradigm case of insidious irrationality. Whatever role our intellect might play in other domains, it seems largely irrelevant when it comes to our sense of right and wrong. Many people will tell you that flag burning, the eating of a deceased pet, and consensual sex between adult siblings are wrong, but when pressed to explain why, they suffer what Jonathan Haidt has described as “moral dumbfounding.” They flail around trying to find reasons, which suggests it’s not the reasons themselves that guided their judgments, but their gut intuition.

But as I argue in my book Just Babies, the existence of moral dumbfounding is less damning than it might seem. It is not the rule. People are not at a loss when asked why drunk driving is wrong, or why a company shouldn’t pay a woman less than a man for the same job, or why you should hold the door open for someone on crutches. We can easily justify these views by referring to fundamental concerns about harm, equity, and kindness. Moreover, when faced with difficult problems, we think about them—we mull, deliberate, argue. I’m thinking here not so much about grand questions such as abortion, capital punishment, just war, and so on, but rather about the problems of everyday life. Is it right to cross a picket line? Should I give money to the homeless man in front of the bookstore? Was it appropriate for our friend to start dating so soon after her husband died? What do I do about the colleague who is apparently not intending to pay me back the money she owes me?

Such rumination matters. If our moral attitudes are entirely the result of nonrational factors, such as gut feelings and the absorption of cultural norms, they should either be stable or randomly drift over time, like skirt lengths or the widths of ties. They shouldn’t show systematic change over human history. But they do. As the Princeton philosopher Peter Singer has put it, the moral circle has expanded: our attitudes about the rights of women, homosexuals, and racial minorities have all shifted toward inclusiveness.

Regardless of whether or not one views this as moral progress (some nihilists and cultural relativists think there is no such thing), it does suggest a cumulative evolution. People come to moral conclusions, often through debate and consultation with others, and these conclusions form the foundation for further progress. Just as modern evolutionary theory builds on the work of Darwin, our moral understanding builds on the moral discoveries of others, such as the wrongness of slavery and sexism.

WE'RE AT OUR WORST when it comes to politics. This helps explain why recent attacks on rationality have captured the imagination of the scientific community and the public at large. Politics forces us to confront those who disagree with us, and we’re not naturally inclined to see those on the other side of an issue as rational beings. Why, for instance, do so many Republicans think Obama’s health-care plan violates the Constitution? Writing in The New Yorker in June 2012, Ezra Klein used the research of Haidt and others to argue that Republicans despise the plan on political, not rational, grounds. Initially, he notes, they objected to what the Democrats had to offer out of a kind of tribal sense of loyalty. Only once they had established that position did they turn to reason to try to justify their views.

But notice that Klein doesn’t reach for a social-psychology journal when articulating why he and his Democratic allies are so confident that Obamacare is constitutional. He’s not inclined to understand his own perspective as the product of reflexive loyalty to the ideology of his own group. This lack of interest in the source of one’s views is typical. Because most academics are politically left of center, they generally use their theories of irrationality to explain the beliefs of the politically right of center. They like to explore how psychological biases shape the decisions people make to support Republicans, reject affirmative-action policies, and disapprove of homosexuality. But they don’t spend much time investigating how such biases might shape their own decisions to support Democrats, endorse affirmative action, and approve of gay marriage.

None of this is to say that Klein is mistaken. Irrational processes do exist, and they can ground political and moral decisions; sometimes the right explanation is groupthink or cognitive dissonance or prejudice. Irrationality is unlikely to be perfectly proportioned across political parties, and it’s possible, as the journalist Chris Mooney and others have suggested, that the part of the population that chose Obama in the most recent presidential election is more reasonable than the almost equal part that chose Romney.

But even if this were so, it would tell us little about the human condition. Most of us know nothing about constitutional law, so it’s hardly surprising that we take sides in the Obamacare debate the way we root for the Red Sox or the Yankees. Loyalty to the team is what matters. A set of experiments run by the Stanford psychologist Geoffrey Cohen illustrates this principle perfectly. Subjects were told about a proposed welfare program, which was described as being endorsed by either Republicans or Democrats, and were asked whether they approved of it. Some subjects were told about an extremely generous program, others about an extremely stingy program, but this made little difference. What mattered was party: Democrats approved of the Democratic program, and Republicans, the Republican program. When asked to justify their decision, however, participants insisted that party considerations were irrelevant; they felt they were responding to the program’s objective merits. This appears to be the norm. The Brown psychologist Steven Sloman and his colleagues have found that when people are called upon to justify their political positions, even those that they feel strongly about, many are unable to point to specifics. For instance, many people who claim to believe deeply in cap and trade or a flat tax have little idea what these policies actually mean.

So, yes, if you want to see people at their worst, press them on the details of those complex political issues that correspond to political identity and that cleave the country almost perfectly in half. But if this sort of irrational dogmatism reflected how our minds generally work, we wouldn’t even make it out of bed each morning. Such scattered and selected instances of irrationality shouldn’t cloud our view of the rational foundations of our everyday life. That would be like saying the most interesting thing about medicine isn’t the discovery of antibiotics and anesthesia, or the construction of large-scale programs for the distribution of health care, but the fact that people sometimes forget to take their pills.

Reason underlies much of what matters in the world, including the uniquely human project of reshaping our environment to achieve higher goals. Consider again our racial and gender stereotypes. Many people believe that circumstances exist in which it is wrong to use these stereotypes when making judgments. If we are worried about this, we can act. We can use reason to invent procedures that undermine our explicit and implicit biases. Blind reviewing and blind auditions block judges from using stereotypes, even unconsciously, by shielding them from information about candidates’ race or sex or anything else other than the merits of what one is supposed to be judging. Quota systems and diversity requirements take the opposite tack, and are rooted in different intuitions about the morally right thing to do; they enforce representation by minority groups, thereby taking the decision out of the hands of individuals with their own preferences and agendas and biases.

This is how moral progress happens. We don’t become better merely through good intentions and force of will, just as we don’t usually lose weight or give up smoking merely by wanting to. We use our intelligence. We establish laws, create social institutions, write constitutions, and evolve customs. We manage information and constrain options, allowing our better selves to overcome those gut feelings and appetites that we believe we would be better off without. Yes, we are physical beings, and yes, we are continually swayed by factors beyond our control. But as Aristotle recognized long ago, what’s so interesting about us is our capacity for reason, which reigns over all. If you miss this, you miss almost everything that matters.

~ Paul Bloom is a professor of psychology and cognitive science at Yale University, and the author of Just Babies: The Origins of Good and Evil (2013).

Wednesday, February 19, 2014

What Is Brain Death? (Excellent Explainer)

From Christian Jarrett at Wired, this is a nice explainer on what brain death is and how we can or cannot identify it. As everyone will remember from the Terri Schiavo situation in 2005, the ethics and the emotion around all of this are intense.

What Is Brain Death?


By Christian Jarrett
02.10.14


Image: Flickr / Opensource.com

Brain death is a tragic topic where neuroscience, ethics and philosophy collide. Two recent cases have sent this sensitive and thorny issue once again into the media spotlight.

Last November, 14 weeks into her pregnancy, 33-year-old Marlise Munoz collapsed at home from a suspected pulmonary embolism. The next day doctors declared that she was brain dead. However, against her own and her family’s wishes, John Peter Smith Hospital in Fort Worth, Texas, chose to maintain Munoz’s body on ventilators because it said it had a legal duty of care to her unborn fetus. On Sunday, January 26, following a successful lawsuit brought by her family, the hospital finally turned off the ventilators.

Meanwhile, teenager Jahi McMath was declared brain dead last December following complications that ensued after a tonsillectomy. In this case, the hospital wanted to turn off McMath’s artificial life support, but her family resisted this move, and she has been transferred to another facility where her body is being maintained by mechanical respirator.

These contrasting cases provide a glimpse into the tragedy and ethical sensitivities surrounding the issue of brain death. Before we go any further, what are your first reactions to the stories? Do you believe that Marlise Munoz was dead after doctors declared her brain dead? What about Jahi McMath?

According to accepted medical and legal criteria, both Munoz and McMath were officially dead from the moment of brain death. The Uniform Determination of Death Act (UDDA) drafted in 1981 is accepted by all 50 US States. It determines that a person is dead if either their cardiovascular functioning has ceased or their brain has irreversibly stopped functioning. The precise methods and criteria for determining brain death vary from hospital to hospital, but the American Academy of Neurology states that three criteria must be fulfilled to confirm the diagnosis: “coma (with a known cause), absence of brainstem reflexes, and apnea [the cessation of breathing without artificial support].” In practice, clinicians will also look for an absence of motor responses (movement) and will rule out any other possible explanations for loss of brain function, such as drugs or hypothermia. Assessment will also be repeated again after several hours. For more details, the NHS website has a description of the diagnostic tests used for brain death in the UK.

The UDDA concept of brain death has its roots in a 1968 definition composed by medics and scholars at Harvard Medical School that outlines how death can be defined in terms of irreversible coma. Steven Laureys (of the Coma Science Group at Liège University Hospital) explains that earlier than that, a pair of French neurologists in 1959 also used the term “coma dépassé” (irretrievable coma) to refer to the same concept.

In contrast to the unequivocal contemporary official medical and legal position on brain death, surveys show widespread misunderstanding among the US public about what the term means. In 2003, in a survey of 1,000 households, James DuBois and T. Schmidt found that 47 percent agreed wrongly that “a person who is declared brain dead by a physician is still alive according to the law.” In 2004, a survey of 1,351 residents of Ohio found that 28 percent believed that brain dead people can hear. Yet another study, from 2003, found that only 16 percent of 403 surveyed families equated brain death with death.

This confusion is reflected in recent media coverage of the cases of Munoz and McMath. On January 26, reporting on the case of Marlise Munoz, the BBC stated: “A brain dead woman kept alive by a hospital in Texas because she was pregnant has been taken off life support [emphasis added].” In fact Munoz was not “kept alive” by the hospital – she was legally dead the moment that doctors determined that she was brain dead. Or consider an essay in American Thinker published on January 28: “Jahi McMath is alive [emphasis added]” declares its headline. And finally, from just a few days ago in Hollywood Life: “Brain dead woman to be kept alive until baby’s birth [emphasis added].”

These deviations from accepted medical understanding are not new or unusual. In an article published last year, Ariane Daoust and Eric Racine surveyed media coverage of brain death in US and Canadian newspapers between 2005 and 2009. They found few accurate definitions of brain death, together with many contradictory and colloquial uses of the term. Not only is “brain dead” used as a slang derogatory term for stupid politicians and celebrities, it’s also used erroneously to refer to people in a persistent vegetative state (PVS is characterised by a complete lack of awareness, but unlike brain death, this is sometimes potentially reversible, and some brain activity remains including brainstem function; Terri Schiavo was diagnosed as being PVS). Daoust and Racine also cited examples of news reports that implied a person could die a second time – once from brain death, and then a second death after life support is removed. For example, this is from The New York Times in 2005: “That evening Mrs. Cregan was declared brain-dead. The family had her respirator disconnected the next morning, and she died almost immediately.”

Surveys show that even medical professionals often lack understanding of the concept. In 2012, for example, a Spanish survey of graduating medical students found that only two-thirds believed that brain death is the same as death. Earlier, in 1989, Youngner et al. surveyed 195 US physicians and nurses and found that only 38 percent correctly understood the legal and medical criteria for death. In an overview of surveys of the public and medical personnel, James DuBois and colleagues in 2006 concluded that “studies consistently show that the general public and some medical personnel are inadequately familiar with the legal and medical status of brain death.”

Perhaps the most alarming example of misunderstanding of brain death by a medical professional comes from a 2007 paper by Professor of Medical Ethics Robert Truog (pdf). He describes the time that Dr. Sanjay Gupta (a neurosurgeon and Senior Medical Correspondent for CNN) appeared on Larry King in 2005 to discuss the tragic case of Susan Torres, another pregnant woman declared brain dead. “Well, you know, a dead person really means that the heart is no longer beating,” Gupta said. “I mean, that’s going to be the strict definition of it […] people do draw a distinction between brain dead and dead.” Here, in front of a massive mainstream audience, Dr. Gupta profoundly misrepresented the medical and legal facts around the criteria for death.

It is easy to understand why there is so much confusion. Many people implicitly associate life with breathing and heart function, and to see a person breathing (albeit with artificial support) and to be told they are in fact dead can be difficult to comprehend. The ability after brain death to carry a fetus, for wounds to heal, and for sexual maturation to occur also adds to many people’s incomprehension at the notion that brain dead means dead. But for those more persuaded by the idea of death as irrevocably linked, not with brain function, but with the end of heart and lung activity, consider this unpleasant thought experiment (borrowed from LiPuma and DeMarco). If a decapitated person’s body could be maintained on life support – with beating heart and circulating, oxygenated blood – would that person still be “alive” without their brain? And consider the converse – the classic “brain in a vat”. Would a conscious, thinking brain, sustained this way, though it had no breath and no beating heart, be considered dead? Surely not. Such unpalatable thought experiments demonstrate how brain death can actually be a more compelling marker of end of life than any perspective that focuses solely on bodily function.

Let’s be clear – there is continuing expert and public debate and controversy around how to define death, including brain death (to give you a taster, scholarly articles published over the last decade include “The death of whole-brain death” and “The incoherence of determining death by neurological criteria“). It is right that this debate and discussion continues. However, it’s also important that the public understand the existing consensus that is founded on the latest medical evidence and deliberation – that brain death means death. It’s not a preliminary or unfinished form of death. It’s not a persistent vegetative state. It is final. It is death. Families and medical professionals caring for brain dead patients are involved in terribly difficult decisions about organ donation and it is especially crucial that they know what the current medical and legal consensus is, and that they understand brain death means a permanent end of the person’s mental processing and consciousness, and therefore the end of life. Unsurprisingly, surveys show that people’s decisions about organ donation are affected by their understanding of what brain death means – people who think that brain death isn’t equivalent to death are less likely to agree to donation.

Of course, some people will have personal, spiritual or religious beliefs that contradict the current medical and legal position on brain death (such is the case with McMath’s family), and respect and sensitivity are important in these cases. Note, however, that both mainstream Judaism and Islam have accepted the concept of brain death. And, according to Steven Laureys writing in 2005, the Catholic Church has also stated that “the moment of death is not a matter for the church to resolve.”

I hope I have presented a fair and clear explanation of the current US medical and legal consensus on brain death. This is a tragic and sensitive issue and my heart goes out to the families of Munoz and McMath and others in similar situations.

Homepage image: Joachim Böttger via Ars Electronica/Flickr


Christian Jarrett is a cognitive neuroscientist turned science writer. He’s editor of The British Psychological Society’s Research Digest blog, staff writer on their magazine The Psychologist, and a columnist for 99U. He’s also author of The Rough Guide to Psychology, editor of 30-Second Psychology, and co-author of This Book Has Issues. His next book, due in 2014, is Great Myths of the Brain.

Read more by Christian Jarrett

Follow @Psych_Writer on Twitter.

Saturday, June 15, 2013

Is DNA Collection the New Fingerprinting?

On Monday, June 3rd, the Supreme Court ruled that it is permissible to collect a DNA sample from suspects who are under arrest. In their 5-4 ruling, the Justices decided that swabbing a person’s cheek (the primary method of DNA collection) prior to conviction does not constitute an unreasonable search. The only qualifiers given were that the person be under arrest “for a serious offense” and have been brought “to the station to be detained in custody.”

So what determines a "serious offense"?

I can see this ruling being misused in a multitude of ways, not least of which is arresting suspects as a "fishing expedition" to charge them with previous crimes or suspected crimes.

Once the DNA is collected, where does it go, and who takes possession of it? Does it get entered into the national database? Does it get destroyed if the person is innocent? There are a lot of issues with this ruling, and this article from Pacific Standard looks at the slippery slope it entails.

DNA Collection Is the New Fingerprinting

What will it mean for crime suspects—and for victims?


June 3, 2013 • By Lauren Kirchner


(ILLUSTRATION: JEZPER/SHUTTERSTOCK) 

On Monday, the Supreme Court gave the OK to the controversial practice of cops collecting DNA samples from crime suspects under arrest. In a 5-4 ruling, the justices decided that swabbing a person’s cheek prior to their conviction of any crime did not constitute an unreasonable search—so long as the suspect was under arrest “for a serious offense” and had been brought “to the station to be detained in custody.”

According to NBC News, 28 states and the federal government already adhere to this practice. This case dates back to the 2009 arrest of 26-year-old Alonzo King on assault charges. Maryland police swabbed his cheek after his arrest, and by running it through a DNA database, matched him to an unsolved rape case.

The slippery-slope argument here is a fitting one, of course. If cops can collect DNA without a conviction, without a warrant, then how soon will it be until they can collect it from anyone during routine traffic stops, or any time? Or until other institutions besides law enforcement can? Justice Scalia, writing in his dissent on Monday, addressed those concerns.

“Today’s judgment will, to be sure, have the beneficial effect of solving more crimes,” he wrote. “Then again, so would the taking of DNA samples from anyone who flies on an airplane.”

Justices voting in the majority compared DNA collection to a more advanced version of fingerprinting. During oral argument back in February, Justice Alito stressed the significance of this new technology, which has the potential to solve countless murders and rapes with “a very minimal intrusion on personal privacy.”

Is the DNA-fingerprint comparison an accurate one? In an age when an artist can pick up an old piece of chewing gum from the sidewalk and create a 3-D model of the gum-chewer’s face, it sounds a bit naïve.

Monday’s Supreme Court ruling is only one of many difficult cases that will arise, here and elsewhere, surrounding DNA sampling and sequencing technology. High-publicity instances of new DNA evidence freeing a wrongly-convicted prisoner may increase public support for DNA collection by law enforcement. At the same time, DNA-sequencing companies like 23AndMe and EasyDNA entering the mainstream may also make people feel more comfortable with the idea that something as private and complex as their genetic makeup can be mined for benefits both personal and societal. Canadian law enforcement officials are lobbying their government on the same issue now. In the U.K., a police commissioner is defending the right of cops to take samples from children under the age of 18 who are suspected of even minor offenses.

But what about non-criminal DNA databases? Privacy protection concerns should apply to victims of crime just as much as, if not more than, they do to crime perpetrators and suspects. An article in Trends in Genetics out last month addressed the very tricky balance between identifying victims and protecting those victims’ privacy when DNA collection is involved in the process.

According to the report’s authors, Joyce Kim and Sara H. Katsanis of Duke University, government agencies are increasingly using DNA databases specifically to identify victims of human trafficking and other human-rights violations. For instance, they write, “Routine, systematic databasing of family member profiles of missing persons” may help identify kidnapping or murder victims. Databases could also prevent children from being placed up for illegal adoptions. If there is ever a proper use for DNA in law enforcement, the authors argue, this is it—but there must be boundaries set, and soon.

“Scholars estimate that, globally, government-operated DNA databases will grow from approximately 30 million profiles in 2011 to 100 million profiles in 2015,” according to the report. Many of the existing collection programs, Katsanis and Kim note, “involve vulnerable populations, including children, sex workers, and persons whose legal or resident status may be questioned.”

The coordination and ownership of these databases is also at issue. “Government-held DNA databases can be readily monitored for quality and security, but less-secure private entities, such as NGOs or entities with diplomatic immunity, could minimize abuse of power,” the authors write. And the more centralized and internationally-accessible the databases get, the more security issues and legal complications will potentially arise.

Even outside of the law-enforcement and crime-prevention realms, the ownership of genetic information is an ongoing debate. California legislators are currently considering a new law to require genetic-testing firms like 23AndMe and EasyDNA to obtain a person’s permission before processing their information and putting it in their genetic database. Currently, it is perfectly legal to send someone else’s “genetic material” to one of these companies, for instance, for paternity information or, as one company puts it, “infidelity testing.” From the San Jose Mercury News: “‘We have privacy laws in place to protect health and financial information,’ said the bill’s author, Alex Padilla, D-Pacoima. ‘But arguably the most personal information about us—our own genetic profile—isn’t protected.’”

What’s more, a recent MIT study showed how easy it is for genetic databases to be hacked, making genome theft an actual, and frightening, possibility. “By means of your DNA, nature provides you with a security flaw that makes Microsoft Windows look like Fort Knox,” writes Michael White elsewhere on Pacific Standard today.

Having a not-quite-accurate 3-D model made of your face is one thing; putting detailed medical information in the hands of hackable Internet sites is quite another. Clearly, the security of these DNA databases should be just as pressing an issue as the collection of people’s DNA in the first place, criminals or no.

Thursday, May 30, 2013

Adrian Raine - The Criminal Mind

A month or so ago in the Wall Street Journal, Adrian Raine wrote about the emerging confluence of neuroscience and genetics with the legal system and our ideas of justice. Some forms of violent behavior and criminality have identifiable neural correlates.

I suspect this will remain valuable science that is unlikely to change the legal system. Americans are still very much embedded in the Old Testament notion of an eye for an eye, and as long as that is true, I would not expect to see any real changes in the legal system.

The Criminal Mind

Advances in genetics and neuroscience are revolutionizing our understanding of violent behavior—as well as ideas about how to prevent and punish crime.


THE SATURDAY ESSAY
April 26, 2013
By ADRIAN RAINE

In studying brain scans of criminals, researchers are discovering tell-tale signs of violent tendencies. WSJ's Jason Bellini speaks with Professor Adrian Raine about his latest discoveries.

The scientific study of crime got its start on a cold, gray November morning in 1871, on the east coast of Italy. Cesare Lombroso, a psychiatrist and prison doctor at an asylum for the criminally insane, was performing a routine autopsy on an infamous Calabrian brigand named Giuseppe Villella. Lombroso found an unusual indentation at the base of Villella's skull. From this singular observation, he would go on to become the founding father of modern criminology.

Lombroso's controversial theory had two key points: that crime originated in large measure from deformities of the brain and that criminals were an evolutionary throwback to more primitive species. Criminals, he believed, could be identified on the basis of physical characteristics, such as a large jaw and a sloping forehead. Based on his measurements of such traits, Lombroso created an evolutionary hierarchy, with Northern Italians and Jews at the top and Southern Italians (like Villella), along with Bolivians and Peruvians, at the bottom.

These beliefs, based partly on pseudoscientific phrenological theories about the shape and size of the human head, flourished throughout Europe in the late 19th and early 20th centuries. Lombroso was Jewish and a celebrated intellectual in his day, but the theory he spawned turned out to be socially and scientifically disastrous, not least by encouraging early-20th-century ideas about which human beings were and were not fit to reproduce—or to live at all.

The racial side of Lombroso's theory fell into justifiable disrepute after the horrors of World War II, but his emphasis on physiology and brain traits has proved to be prescient. Modern-day scientists have now developed a far more compelling argument for the genetic and neurological components of criminal behavior. They have uncovered, quite literally, the anatomy of violence, at a time when many of us are preoccupied by the persistence of violent outrages in our midst.

The field of neurocriminology—using neuroscience to understand and prevent crime—is revolutionizing our understanding of what drives "bad" behavior. More than 100 studies of twins and adopted children have confirmed that about half of the variance in aggressive and antisocial behavior can be attributed to genetics. Other research has begun to pinpoint which specific genes promote such behavior.

Brain-imaging techniques are identifying physical deformations and functional abnormalities that predispose some individuals to violence. In one recent study, brain scans correctly predicted which inmates in a New Mexico prison were most likely to commit another crime after release. Nor is the story exclusively genetic: A poor environment can change the early brain and make for antisocial behavior later in life.

Most people are still deeply uncomfortable with the implications of neurocriminology. Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. Both sides fear any seeming effort to erode the idea of human agency and free will.

It is growing harder and harder, however, to avoid the mounting evidence. With each passing year, neurocriminology is winning new adherents, researchers and practitioners who understand its potential to transform our approach to both crime prevention and criminal justice.

The genetic basis of criminal behavior is now well established. Numerous studies have found that identical twins, who have all of their genes in common, are much more similar to each other in terms of crime and aggression than are fraternal twins, who share only 50% of their genes.

[Image: Donta Page's brain scan, left, shows reduced functioning of the ventral prefrontal cortex—the area of the brain that helps regulate emotions and control impulses—compared to a normal brain, right. Page avoided the death penalty based in part on brain pathology.]
In a landmark 1984 study, my colleague Sarnoff Mednick found that children in Denmark who had been adopted from parents with a criminal record were more likely to become criminals in adulthood than were other adopted kids. The more offenses the biological parents had, the more likely it was that their offspring would be convicted of a crime. For biological parents who had no offenses, 13% of their sons had been convicted; for biological parents with three or more offenses, 25% of their sons had been convicted.

As for environmental factors that affect the young brain, lead is neurotoxic and particularly damages the prefrontal region, which regulates behavior. Measured lead levels in our bodies tend to peak at 21 months—an age when toddlers are apt to put their fingers into their mouths. Children generally pick up lead in soil that has been contaminated by air pollution and dumping.

Rising lead levels in the U.S. from 1950 through the 1970s neatly track increases in violence 20 years later, from the '70s through the '90s. (Violence peaks when individuals are in their late teens and early 20s.) As lead in the environment fell in the '70s and '80s—thanks in large part to the regulation of gasoline—violence fell correspondingly. No other single factor can account for both the otherwise inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.
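
For readers who want to see what "tracking with a 20-year lag" amounts to in practice, here is a rough sketch of a lagged correlation. The yearly series are entirely invented for demonstration; they are not real lead or crime statistics, and only numpy is assumed.

```python
# Rough sketch of a 20-year lagged correlation, using made-up yearly series.
import numpy as np

years = np.arange(1940, 2011)

# Invented "environmental lead index": rises until about 1970, then declines.
lead = np.where(years <= 1970,
                (years - 1940) / 30,
                np.clip(1 - (years - 1970) / 25, 0, None))

# Invented "violence index": the same shape shifted forward by 20 years,
# plus a little noise -- mimicking the peak-around-1993 pattern in the text.
rng = np.random.default_rng(1)
violence = np.interp(years - 20, years, lead) + rng.normal(0, 0.05, len(years))

# Correlate lead at year t with violence at year t + 20.
lag = 20
r = np.corrcoef(lead[:-lag], violence[lag:])[0, 1]
print(f"correlation of lead with violence {lag} years later: {r:.2f}")
```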

Lead isn't the only culprit. Other factors linked to higher aggression and violence in adulthood include smoking and drinking by the mother before birth, complications during birth and poor nutrition early in life.

Genetics and environment may work together to encourage violent behavior. One pioneering study in 2002 by Avshalom Caspi and Terrie Moffitt of Duke University genotyped over 1,000 individuals in a community in New Zealand and assessed their levels of antisocial behavior in adulthood. They found that a genotype conferring low levels of the enzyme monoamine oxidase A (MAOA), when combined with early child abuse, predisposed the individual to later antisocial behavior. Low MAOA has been linked to reduced volume in the amygdala—the emotional center of the brain—while physical child abuse can damage the frontal part of the brain, resulting in a double hit.

Brain-imaging studies have also documented impairments in offenders. Murderers, for instance, tend to have poorer functioning in the prefrontal cortex—the "guardian angel" that keeps the brakes on impulsive, disinhibited behavior and volatile emotions.

Of course, not everyone with a particular brain profile is a murderer—and not every offender fits the same mold. Those who plan their homicides, like serial killers, tend to have good prefrontal functioning. That makes sense, since they must be able to regulate their behavior carefully in order to escape detection for a long time.

So what explains coldblooded psychopathic behavior? About 1% of us are psychopaths—fearless antisocials who lack a conscience. In 2009, Yaling Yang, Robert Schug and I conducted structural brain scans on 27 psychopaths whom we had found in temporary-employment agencies in Los Angeles. All got high scores on the Psychopathy Checklist, the "gold standard" in the field, which assesses traits like lack of remorse, callousness and grandiosity. We found that, compared with 32 normal people in a control group, psychopaths had an 18% smaller amygdala, which is critical for emotions like fear and is part of the neural circuitry underlying moral decision-making. In subsequent research, Andrea Glenn and I found this same brain region to be significantly less active in psychopathic individuals when they contemplate moral issues. Psychopaths know at a cognitive level what is right and what is wrong, but they don't feel it.

What are the practical implications of all this evidence for the physical, genetic and environmental roots of violent behavior? What changes should be made in the criminal-justice system?

Let's start with two related questions: If early biological and genetic factors beyond the individual's control make some people more likely to become violent offenders than others, are these individuals fully blameworthy? And if they are not, how should they be punished?

Take the case of Donta Page, who in 1999 robbed a young woman in Denver named Peyton Tuthill, then raped her, slit her throat and killed her by plunging a kitchen knife into her chest. Mr. Page was found guilty of first-degree murder and was a prime candidate for the death penalty.

Working as an expert witness for Mr. Page's defense counsel, I brought him to a lab to assess his brain functioning. Scans revealed a distinct lack of activation in the ventral prefrontal cortex—the brain region that helps to regulate our emotions and control our impulses.

In testifying, I argued for a deep-rooted biosocial explanation for Mr. Page's violence. As his files documented, as a child he suffered from poor nutrition, severe parental neglect, sustained physical and sexual abuse, early head injuries, learning disabilities, poor cognitive functioning and lead exposure. He also had a family history of mental illness. By the age of 18, Mr. Page had been referred for psychological treatment 19 times, but he had never once received treatment. A three-judge panel ultimately decided not to have him executed, accepting our argument that a mix of biological and social factors mitigated Mr. Page's responsibility.

Mr. Page escaped the death penalty partly on the basis of brain pathology—a welcome result for those who believe that risk factors should partially exculpate socially disadvantaged offenders. But the neurocriminologist's sword is double-edged. Neurocriminology also might have told us that Mr. Page should never have been on the street in the first place. At the time he committed the murder, he had been out of prison for only four months. Sentenced to 20 years for robbery, he was released after serving just four years.

What if I had been asked to assess him just before he was released? I would have said exactly what I said in court when defending him. All the biosocial boxes were checked: He was at heightened risk for committing violence for reasons beyond his control. It wasn't exactly destiny, but he was much more likely to be impulsively violent than not.

This brings us to the second major change that may be wrought by neurocriminology: incorporating scientific evidence into decisions about which soon-to-be-released offenders are at the greatest risk for reoffending. Such risk assessment is currently based on factors like age, prior arrests and marital status. If we were to add biological and genetic information to the equation—along with recent statistical advances in forecasting—predictions about reoffending would become significantly more accurate.
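
As an illustration of what "adding biological information to the equation" might look like statistically, here is a minimal sketch of an actuarial risk model. The feature names (including the brain-activity measure) and the tiny synthetic dataset are invented for demonstration, scikit-learn is assumed to be available, and nothing here represents a validated forecasting tool.

```python
# Minimal sketch: logistic-regression risk model with and without a
# hypothetical biological predictor. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Conventional predictors: age at release, number of prior arrests, married (0/1).
age = rng.uniform(18, 60, n)
priors = rng.poisson(2, n)
married = rng.integers(0, 2, n)

# Hypothetical biological predictor: a normalized stand-in for the kind of
# brain-activity measure mentioned in the text (purely illustrative).
acc_activity = rng.normal(0, 1, n)

# Synthetic "reoffended within 4 years" label: younger age, more priors,
# and lower brain activity raise the simulated risk.
logit = -1.0 - 0.04 * (age - 35) + 0.4 * priors - 0.2 * married - 0.8 * acc_activity
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_conventional = np.column_stack([age, priors, married])
X_augmented = np.column_stack([age, priors, married, acc_activity])

for name, X in [("conventional", X_conventional), ("augmented", X_augmented)]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(name, "in-sample accuracy:", round(model.score(X, y), 2))
```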

In a 2013 study, Kent Kiehl of the University of New Mexico, looking at a population of 96 male offenders in the state's prison system, found that in the four years after their release, those with low activity in the anterior cingulate cortex—a brain area involved in regulating behavior—were twice as likely to commit another offense as those who had high activity in this region. Research soon to be published by Dustin Pardini of the University of Pittsburgh shows that men with a smaller amygdala are three times more likely to commit violence three years later.

Of course, if we can assess criminals for their propensity to reoffend, we can in theory assess any individual in society for his or her criminal propensity—making it possible to get ahead of the problem by stopping crime before it starts. Ultimately, we should try to reach a point where it is possible to deal with repeated acts of violence as a clinical disorder.

Randomized, controlled trials have clearly documented the efficacy of a host of medications—including stimulants, antipsychotics, antidepressants and mood stabilizers—in treating aggression in children and adolescents. Parents are understandably reluctant to have their children medicated for bad behavior, but when all else fails, treating children to stabilize their uncontrollable aggressive acts and to make them more amenable to psychological interventions is an attractive option.

Treatment doesn't have to be invasive. Randomized, controlled trials in England and the Netherlands have shown that a simple fix—omega-3 supplements in the diets of young offenders—reduces serious offending by about 35%. Studies have also found that early environmental enrichment—including better nutrition, physical exercise and cognitive stimulation—enhances later brain functioning in children and reduces adult crime.

Over the course of modern history, increasing scientific knowledge has given us deeper insights into epilepsy, psychosis and substance abuse, and has promoted a more humane perspective. Just as mental disorders were once viewed as a product of evil forces, the "evil" you see in violent offenders today may someday be reformulated as a symptom of a physiological disorder.

There is no question that neurocriminology puts us on difficult terrain, and some wish it didn't exist at all. How do we know that the bad old days of eugenics are truly over? Isn't research on the anatomy of violence a step toward a world where our fundamental human rights are lost?

We can avoid such dire outcomes. A more profound understanding of the early biological causes of violence can help us take a more empathetic, understanding and merciful approach toward both the victims of violence and the prisoners themselves. It would be a step forward in a process that should express the highest values of our civilization.

—Dr. Raine is the Richard Perry University Professor of Criminology, Psychiatry and Psychology at the University of Pennsylvania and author of The Anatomy of Violence: The Biological Roots of Crime, to be published on April 30 by Pantheon, a division of Random House.


A version of this article appeared April 27, 2013, on page C1 in the U.S. edition of The Wall Street Journal, with the headline: The Criminal Mind.

Friday, January 04, 2013

Court Rules Woman Technically Not Raped Due to an Arcane Law (1872)

This is so effed up it's almost impossible to comprehend. In a situation such as this, there needs to be some mechanism through which judges can set aside an outdated law and rule on the facts of the case.

This may be one of those rare instances where justice might be outside the legal system.

Court Rules Woman Technically Not Raped Because of Marital Status

Posted Jan 4, 2013



A California appeals court has decided that an 18-year-old woman technically wasn’t raped by a man who had sex with her while she was asleep because he was pretending to be her boyfriend. But if he had been her husband? The court acknowledged the outcome would have been different.

Wait, what?!

Here’s the reason: a ridiculous and arcane law from 1872 that says it would be considered rape only if the woman had been married and the man had been impersonating her husband.

“A man enters the dark bedroom of an unmarried woman after seeing her boyfriend leave late at night, and has sexual intercourse with the woman while pretending to be the boyfriend,” the court decision read. “Has the man committed rape? Because of historical anomalies in the law and the statutory definition of rape, the answer is no, even though, if the woman had been married and the man had impersonated her husband, the answer would be yes.”

Yeah, that’s all kinds of messed up.

And because it’s unclear whether the jury convicted Julio Morales for having sex with a sleeping woman (which would be considered rape), or for deceiving her into thinking he was her boyfriend (which is not considered rape because the woman is not married), the 2nd District Court of Appeal in Los Angeles overturned the conviction of Morales and ruled he must be retried.

“Today’s news is such bullshit that it’s hard to process even with that in mind,” Jezebel’s Katie J.M. Baker writes Friday. “Sleeping with someone while they are sleeping is rape. Tricking someone into sleeping with you is also rape, to say the least of what that is. The definition of rape should depend on the act itself, not on the identity of the person you are impersonating. Maybe that didn’t go without saying in the Victorian Era, but it sure should now.”

Exactly.

—Posted by Tracy Bloom.

Saturday, October 13, 2012

Steve Fleming - Neuroscience and Criminality

This excellent article from Steve Fleming appeared at AEON, a very cool web magazine for those who are not familiar with it. Fleming, who blogs at The Elusive Self, takes an interesting look at how our increasing understanding of the neuroscientific foundations of human behavior is changing, or will change, our notions of guilt and criminality.

Was it really me?

Neuroscience is changing the meaning of criminal guilt. That might make us more, not less, responsible for our actions

26 September 2012

Illustration by Matt Murphy

Steve Fleming is a cognitive neuroscientist. He is a postdoctoral fellow at New York University and a blogger at The Elusive Self.

In the summer of 2008, police arrived at a caravan in the seaside town of Aberporth, west Wales, to arrest Brian Thomas for the murder of his wife. The night before, in a vivid nightmare, Thomas believed he was fighting off an intruder in the caravan – perhaps one of the kids who had been disturbing his sleep by revving motorbikes outside. Instead, he was gradually strangling his wife to death. When he awoke, he made a 999 call, telling the operator he was stunned and horrified by what had happened, and unaware of having committed murder.

Crimes committed by sleeping individuals are mercifully rare. Yet they provide striking examples of the unnerving potential of the human unconscious. In turn, they illuminate how an emerging science of consciousness is poised to have a deep impact upon concepts of responsibility that are central to today’s legal system.

After a short trial, the prosecution withdrew the case against Thomas. Expert witnesses agreed that he suffered from a sleep disorder known as pavor nocturnus, or night terrors, which affects around one per cent of adults and six per cent of children. His nightmares led him to do the unthinkable. We feel a natural sympathy towards Thomas, and jurors at his trial wept at his tragic situation. There is a clear sense in which this action was not the fault of an awake, thinking, sentient individual. But why do we feel this? What is it exactly that makes us think of Thomas not as a murderer but as an innocent man who has lost his wife in terrible circumstances?

Our sympathy can be understood with reference to laws that demarcate a separation between mind and body. A central tenet of the Western legal system is the concept of mens rea, or guilty mind. A necessary element of criminal responsibility is the guilty act — the actus reus. However, it is not enough simply to act: one must also be mentally responsible for acting in a particular way. The common law allows for those who are unable to conform to its requirements due to mental illness: the defence of insanity. It also allows for ‘diminished capacity’ in situations where the individual is deemed unable to form the required intent, or mens rea. Those people are understood to have control of their actions, without intending the criminal outcome. In these cases, the defendant may be found guilty of a lesser crime than murder, such as manslaughter.

In the case of Brian Thomas, the court was persuaded that his sleep disorder amounted to ‘automatism’, a comprehensive defence that denies there was even a guilty act. Automatism is the ultimate negation of both mens rea and actus reus. A successful defence of automatism implies that the accused person had neither awareness of what he was doing, nor any control over his actions. That he was so far removed from conscious awareness that he acted like a runaway machine.

The problem is how to establish if someone lacks a crucial aspect of consciousness when he commits a crime. In Thomas’s case, sleep experts provided evidence that his nightmares were responsible for his wife’s death. But in other cases, establishing lack of awareness has proved more elusive.

It is commonplace to drive a car for long periods without paying much attention to steering or changing gear. According to Jonathan Schooler, professor of psychology at the University of California, Santa Barbara, ‘we are often startled by the discovery that our minds have wandered away from the situation at hand’. But if I am unconscious of my actions when I zone out, to what degree is it really ‘me’ doing the driving?

This question takes on a more urgent note when the lives of others are at stake. In April 1990, a heavy-goods driver was steering his lorry towards Liverpool in the early evening. Having driven all day without mishap, he began to veer on to the hard shoulder of the motorway. He continued along the verge for around half a mile before he crashed into a roadside assistance van and killed two men. The driver appeared in Worcester Crown Court on charges of causing death by reckless driving. For the defence, a psychologist described to the court that ‘driving without awareness’ might occur following long, monotonous periods at the wheel. The jury was sufficiently convinced of his lack of conscious control to acquit on the basis of automatism.

The argument for a lack of consciousness here is much less straightforward than for someone who is asleep. In fact, the Court of Appeal said that the defence of automatism should not have been on the table in the first place, because a driver without ‘awareness’ still retains some control of the car. None the less, the grey area between being in control and aware on the one hand, and in control and unaware on the other, is clearly crucial for a legal notion of voluntary action.

If we accept automatism then we reduce the conscious individual to an unconscious machine. However, we should remember that all acts, whether consciously thought-out or reflexive and automatic, are the product of neural mechanisms. For centuries, scientists and inventors have been captivated by this notion of the mind as a machine. In the 18th century, Henri Maillardet, a Swiss clockmaker, built a remarkable apparatus that he christened the Automaton. An intricate array of brass cams connected to a clockwork motor made a doll produce beautiful drawings of ships and pastoral scenes on sheets of paper, as if by magic. This spookily humanoid machine, now on display at the Franklin Institute in Philadelphia, reflects the Enlightenment’s fascination with taming and understanding the mechanisms of life.

Modern neuroscience takes up where Maillardet left off. From the pattern of chemical and electrical signaling between around 85 billion brain cells, each of us experiences the world, makes decisions, daydreams, and forms friendships. The mental and the physical are two sides of the same coin. The unsettling implication is that, by revealing a physical correlate of a conscious state, we begin to treat the individual not as a person but as a machine. Perhaps we are all ‘automata’, and our notions of free choice and taking responsibility for our actions are simply illusions. There is no ghost in the machine.

In his book Incognito (2011), David Eagleman argues that society is poised to slide down the following slippery slope. Measurable brain defects already buy leniency for the defendant. As the science improves, more and more criminals will be let off the hook thanks to a fine-grained analysis of their neurobiology. ‘Currently,’ Eagleman writes, ‘we can detect only large brain tumours, but in 100 years we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems.’ On this view, responsibility has no place in the courtroom. It is no longer meaningful to lock people up on the basis of their actions, because their actions can always be tied to brain function.

It seems inevitable that defence teams will look towards neuroscientific evidence to shift the balance in favour of a mechanistic, rather than a personal, interpretation of criminal acts. But we should be wary of attempts to do so. If every behaviour and mental state has a neural correlate (as surely it must), then everything we do is an artifact of our brains. A link between brain and behaviour is not enough to push responsibility out of the courtroom. Instead we need new ways of thinking about responsibility, and new ways to conceptualise a decision-making self.

Responsibility does not entail a rational, choosing self that floats free from physical processes. That is a fiction. Even so, demonstrating a link between criminal behaviour and conscious (or unconscious) states of the brain changes the legal landscape. Consciousness is, after all, central to the legal definition of intent.

In the early ’70s, the psychologist Lawrence Weiskrantz and the neuropsychologist Elizabeth Warrington discovered a remarkable patient at the National Hospital for Neurology and Neurosurgery in London. This patient, known as DB, had sustained damage to the occipital lobes (towards the rear of the brain), resulting in blindness in half of his visual field. Remarkably, DB was able to guess the position and orientation of lines in his ‘blind’ hemifield. Subsequent studies on similar patients with ‘blindsight’ confirmed that these responses relied on a neural pathway quite separate from the one that usually passes through the occipital lobes. So it appears that visual consciousness is selectively deleted in blindsight. At some level, the person can ‘see’ but is not aware of doing so.

Awareness and control, then, are curious things, and we cannot understand them without grappling with consciousness itself. What do we know about how normal, waking consciousness works? Hints are emerging. Studies by Stanislas Dehaene, professor of experimental cognitive psychology at the Collège de France in Paris, have revealed that a key difference between conscious and unconscious vision is activity in the prefrontal cortex (the front of the brain, particularly well-developed in humans). Other research implies that consciousness emerges when there is the right balance of connectivity between brain regions, known as the ‘information integration’ theory. It has been suggested that anesthesia can induce unconsciousness by disrupting the communication between brain regions.

Just as there are different levels of intent in law, there are different levels of awareness that can be identified in the lab. Despite being awake and functioning, one’s mind might be elsewhere, such as when a driver zones out or when a reader becomes engrossed. A series of innovative experiments have begun to systematically investigate mind-wandering. When participants zone out during a repetitive task, activity increases in the ‘default network’, a set of brain regions previously linked to a focus on internal thoughts rather than the external environment. Under the influence of alcohol, people become more likely to daydream and less likely to catch themselves doing so. These studies are beginning to catalogue the influences and mechanisms involved in zoning out from the external world. With their help we can refine the current legal taxonomy of mens rea and put legal ideas such as recklessness, negligence, knowledge and intent on a more scientific footing.

An increased scientific understanding of consciousness might one day help us to determine the level of intent behind particular crimes and to navigate the blurred boundary between conscious decisions and unconscious actions. At present, however, we face serious obstacles. Most studies in cognitive neuroscience rely on averaging together many individuals. A group of individuals allows us to understand the average, or typical, brain. But it does not follow that each individual in the group is typical. And even if this problem were to be overcome, it would not help us to adjudicate cases in which normal waking consciousness was intact, but happened to be impaired at the time of the crime.

Nonetheless, the brain mechanisms underpinning different levels of consciousness are central to a judgment of automatism. Without consciousness, we are justified in concluding that automatism is in play: not because conscious acts are any less dependent on the brain than unconscious ones, but because conscious action is the kind of action we hold to a higher moral standard. This perspective helps to arrest the slide down Eagleman’s slippery slope. Instead of negating responsibility, neuroscience has the potential to place conscious awareness on an empirical footing, allowing greater certainty about whether a particular individual had the capacity for rational, conscious action at the time of the crime.

Some worry that an increased understanding of consciousness and voluntary action will dissolve our sense of personal responsibility and free will. In fact, neurological self-knowledge could have the opposite effect. Suppose we discover that the brain mechanisms underpinning consciousness are primed to malfunction at a particular time of day, say 7am. Up until this discovery, occasional slips and errors made around this time might have been put down to chance. But now, armed with our greater understanding of the fragility of consciousness, we would be able to put in place counter-measures to make major errors less likely. For Brian Thomas, a greater understanding of his sleep disorder might have allowed him to control it. He had stopped taking his anti-depressant medication when he was on holiday, because he believed it made him impotent. This might have contributed to the night terrors that caused him to strangle his wife.

Crucially, increased self-knowledge often percolates through to laws governing responsible behaviour. A diabetic who slips into a coma while driving is held responsible if the coma was the result of poor management of a known diabetic condition. Someone committing crimes while drunk is held to account, so long as they are responsible for becoming drunk in the first place. A science of consciousness illuminates the factors that lead to unconsciousness. In reconsidering the boundary between consciousness and automatism we will need to take into account the many levels of conscious and unconscious functioning of the brain.

Our legal system is built on a dualist view of the mind-body relationship that has served it well for centuries. Science has done little to disrupt that until now. But neuroscience is different. By directly addressing the mechanisms of the human mind, it has the potential to adjudicate on issues of capacity and intent. With a greater understanding of impairments to consciousness, we might be able to take greater control over our actions, bootstrapping ourselves up from the irrational, haphazard behaviour traditionally associated with automata. Far from eroding a sense of free will, neuroscience may allow us to inject more responsibility than ever before into our waking lives.

~ For references to the scientific research discussed in this essay, see Steve Fleming's blog The Elusive Self.