Showing posts with label reason.

Tuesday, November 11, 2014

The One Thing that Could Save the World: Why We Need Empathy Now More than Ever

I completely disagree with Paul Bloom (psychologist at Yale) that empathy is a poor moral guide. Bloom sets empathy against rationality, which is a false dichotomy.
George Lakoff: “Empathy is at the heart of real rationality, because it goes to the heart of our values, which are the basis of our sense of justice. Empathy is the reason we have the principles of freedom and fairness, which are necessary components of justice.”
In the article below, Roman Krznaric argues that empathy and reason are not opposites but partners, and that history shows empathy's crucial role in winning democratic rights.

The one thing that could save the world: Why we need empathy now more than ever

Critics say that empathy clouds our judgment and distracts us from true morality. Here's what they're missing.

Gregory Peck and Brock Peters in "To Kill a Mockingbird" 
Empathy is trending. President Obama wants to tackle America’s “empathy deficit,” medical students routinely receive empathy training, and everyone from business gurus to the Dalai Lama has become its champion. The latest neuroscience research shows that 98 percent of us have the capacity to empathize wired into our brains and, like riding a bike, it’s a skill we can learn and develop. No wonder Google searches for the E word have more than doubled in the past decade. The art of imaginatively stepping into another person’s shoes and seeing the world from their perspective is, it would seem, a most valuable and valued twenty-first-century asset.

Not so, says Yale psychologist Paul Bloom, leading the counter-charge against empathy’s popularity surge. It is, he claims, a poor moral guide, lacking the power to inspire us to act on, say, child poverty or humanitarian disasters. “Our public decisions will be fairer and more moral once we put empathy aside,” says Bloom, insisting we should instead, “draw on a reasoned, even counter-empathetic, analysis of moral obligation.” But in doing so, Bloom creates a false – and dangerous – dichotomy between empathy and reason, and misses the long lesson from history: that time and again, empathy has played a crucial role in creating a democratic culture that respects human rights. So where have the critics gone wrong?

The anti-empathy brigade launch their attack with the claim that having too much empathy can lead to what Bloom calls “empathetic distress” or burnout. Yes, identifying closely with someone else’s pain and mirroring their emotions – known as affective empathy – can seem overwhelming. But according to altruism expert Daniel Batson at the University of Kansas, there is no scientific evidence that those with high levels of affective empathy are less able to respond to other people’s needs: they are not paralyzed by their sensitivity – indeed many may be motivated by it. Moreover, people who need to keep a cool head when the emotional heat is rising – be they doctors, firefighters or social workers – know to draw, instead, on their capacity for cognitive empathy, an ability Bloom too easily sidelines. And there’s a crucial distinction between the two.

If affective empathy is our mirror for reflecting others’ emotions, cognitive empathy is, by contrast, a pair of shoes that invites us to imagine the world from their viewpoint. So the smart doctor aims not to feel her patient’s anxiety, but to understand it, so that she can respond appropriately. Every good parent teaches cognitive empathy to their kids: ‘Imagine how you’d feel if someone did that to you,’ we tell them as a first step in their moral education. Trying to understand others’ perspectives is an essential part of our emotional intelligence toolkit, and it matters all the more if their lives and needs differ from our own. As George Bernard Shaw quipped, ‘Do not do unto others as you would have them do unto you – they might have different tastes.’ Our cognitive empathy enables us to discover those different tastes.

A second charge against empathy is that it fails at a distance: we empathize more easily with people in our backyards, say the critics, so help our neighbors while ignoring earthquake victims overseas. This is muddled thinking. Proximity is clearly no guarantee of care: we can stroll past a homeless person on our street just as we can be stopped in our tracks by a news story about a woman in Japan left homeless by an earthquake. The real question is how to give people a human face, whether they are near or far, so we get beyond abstract statistics and stereotypes and can make an emotional connection with their lived reality. Without empathy, we could never explain the massive rise in humanitarian giving by individuals to developing countries since the end of World War II.

The strongest critique in the empathy wars is the risk of empathic bias: the concern that we are partial towards our in-group – people of a similar socioeconomic or cultural background to our own. Think of the judge who gives a more lenient sentence to a white-collar criminal whose educational background resembles his own. Empathic bias is real and it matters – but it means that we need to deepen, not discard, our empathy, by escaping the boundaries of our peer group. That’s just what the Eton-educated writer George Orwell did when, in the late 1920s, he swapped his natty suit for tramping clothes and lived amongst hobos and beggars on the streets of East London, an empathic immersion described in “Down and Out in Paris and London” that exploded his prejudices about the homeless. We might all learn from his example and get talking to the strangers on our doorsteps, whether it’s your new Afghan neighbor or the heavily tattooed woman who delivers your mail each day.

Our empathy is powerful but it is clearly not perfect. So should the critiques of Bloom and others convince us to cast our innate empathic abilities aside and rely, instead, on reason as our moral compass? This would be a monumental mistake, proving us blind to the lessons of history. Reason divorced from empathy was a specialty of the Nazis, who used reason to argue that Jews were subhuman and then codified it in the Nuremberg Laws. What made the Holocaust possible was the Nazis’ racial ideology that achieved one of the most successful erosions of societal empathy in political history.

There are positive lessons from history too. A growing wave of scholars, from the cultural historian Lynn Hunt to the arch-rational psychologist Steven Pinker, argue that democratic rights have been won when societies have extended their empathy to previously neglected social groups. Look no further than the 18th century humanitarian revolution, which generated the first campaigns to tackle child poverty, the anti-slavery movement and associations to improve working conditions. In “The Better Angels of Our Nature,” Pinker points out it was rooted in ‘the rise of empathy and the regard for human life’, underpinned by the ‘reading revolution’ as literature opened up imaginations to previously hidden lives. When the public became sensitized to the suffering of marginalized groups, it spurred legislative reform. And this story has been repeated in struggles throughout democratic history, from women’s suffrage to gay rights and disability rights.

If there is one lesson that history teaches us, it is this: empathy cracks open the door of our moral concern, and laws and rights wedge that door open. Reason – embodied in laws and rights – and empathy are not, as the critics contend, polar opposites. They are in fact a democratic double act: like knife and fork, ball and socket, Fred and Ginger, they work best when they work together. As the cognitive linguist George Lakoff puts it, “Empathy is at the heart of real rationality, because it goes to the heart of our values, which are the basis of our sense of justice. Empathy is the reason we have the principles of freedom and fairness, which are necessary components of justice.”

Cast empathy aside to lean on reason alone and we would become emotionally tone deaf and politically indifferent. That is not who we want to be and – more importantly – it is not who we are.

Roman Krznaric’s new book is Empathy: Why It Matters, and How to Get It (Perigee/Penguin, on sale Nov. 4). He is a faculty member of The School of Life in London and founder of the world’s first digital Empathy Library. Follow him on Twitter at @romankrznaric.

Sunday, May 04, 2014

Documentary - The Human Brain (HD)


This is a nice documentary that serves as an introduction to the brain for non-specialists. It covers a lot of ground, so there is not much depth.

The Human Brain (HD full documentary)

Published on Dec 8, 2013


Using simple analogies, real-life case studies, and state-of-the-art CGI, this special shows how the brain works, explains the frequent battle between instinct and reason, and unravels the mysteries of memory and decision-making. It takes us inside the mind of a soldier under fire to see how decisions are made in extreme situations, examines how an autistic person like Rain Man develops remarkable skills, and takes on the age-old question of what makes one person good and another evil. Research is rushing forward. We've learned more about the workings of the brain in the last five years than in the previous one hundred.

Tuesday, March 18, 2014

Steven Pinker, Rebecca Newberger Goldstein: The Long Reach of Reason

The Long Reach of Reason - Steven Pinker, Rebecca Newberger Goldstein



Here's a TED first: an animated Socratic dialog! In a time when irrationality seems to rule both politics and culture, has reasoned thinking finally lost its power? Watch as psychologist Steven Pinker is gradually, brilliantly persuaded by philosopher Rebecca Newberger Goldstein that reason is actually the key driver of human moral progress, even if its effect sometimes takes generations to unfold. The dialog was recorded live at TED, and animated, in incredible, often hilarious, detail by Cognitive.

This talk was presented at an official TED Conference. TED's editors featured it among our daily selections on the home page.


Steven Pinker - Linguist
Linguist Steven Pinker questions the very nature of our thoughts — the way we use words, how we learn, and how we relate to others. In his best-selling books, he has brought sophisticated language analysis to bear on topics of wide general interest.


Rebecca Newberger Goldstein - Philosopher and writer
Rebecca Newberger Goldstein writes novels and nonfiction that explore questions of philosophy, morality and being.
* * * * *

Why this might just be the most persuasive TED Talk ever posted


Posted by: Chris Anderson
March 17, 2014


In today’s talk, “The Long Reach of Reason,” Steven Pinker and Rebecca Newberger Goldstein are brought to life in animation by Andrew Park and his team at Cognitive.
I want to give you the back story behind today’s TED Talk and make the case that it’s one of the most significant we’ve ever posted. And I’m not just talking about its incredible animation. I’m talking about its core idea.

Two years ago the psychologist Steven Pinker and the philosopher Rebecca Newberger Goldstein, who are married, came to TED to take part in a form of Socratic dialog.

She sought to argue that Reason was a much more powerful force in history than it’s normally given credit for. He initially defended the modern consensus among psychologists and neurologists that most human behavior is best explained through other means: unconscious instincts of various kinds. But over the course of the dialog, he is persuaded by her, and together they look back through history and see how reasoned arguments ended up having massive impacts, even if those impacts sometimes took centuries to unfold.

The script was clever, the argument powerful. However, on the day, they bombed. And I’m mainly to blame.

You see, we gambled that year on seeking to expand our repertoire of presentation formats. Their dialog appeared in a session we called “The Dinner Party.” The idea was that all the speakers at the session would be seated around a table. They would individually give their talks, then come sit back down with the others to debate the talk, and everyone would end up the wiser. Seemed like an interesting idea at the time. But it didn’t work. Somehow the chemistry of the dinner guests never ignited. And perhaps the biggest reason for that was that I, as head of the table trying to moderate the conversation, had my back to the audience. The audience disengaged, the evening fell flat, and Steve and Rebecca’s dialog, which also suffered from some audio issues, was rated too low for us to consider posting it online.


At TED2012, Steven Pinker and Rebecca Newberger Goldstein explored how reason shaped human history. We’ve animated the talk to bring new life to this important idea. Photo: James Duncan Davidson
That would normally have been the end of it. Except that a strange thing happened. I could not get their core idea out of my head. The more I thought about it, the more I realized that TED’s entire mission rested on the premise that ideas really matter. And unless reasoned argument is the prime tool shaping those ideas, they can warp into pretty much anything, good or bad.

And so I tried to figure out whether there was a way to rescue the talk. And it turned out that there was. It came in the shape of Andrew Park, who, in my humble-but-true opinion, is the world’s greatest animator of concepts. His RSA Animate series has notched up millions of views for sometimes difficult topics, and we have worked with him before to animate talks from Denis Dutton and some of our TED-Ed lessons (including one from yours truly on Questions No One Knows the Answer To). If he could make me interesting, he sure as hell could do so for Pinker and Goldstein.

And so it turned out. Andrew and his amazing team at Cognitive fixed the audio issue and turned the entire talk into an animated movie of such imagination, humor and, most of all, explanatory power, it took my breath away.

And so here it is. The Long Reach of Reason. A talk in animated dialog form, arguing that Reason is capable of extending its influence across centuries, making it the single most powerful driver of long-term change. Please watch it. A) you’ll be blown away by how it’s animated. B) it may change forever how you think about Reason. And that’s a good thing.

It is a delicious example in favor of the talk’s conclusions that it was the power of its own arguments that kept it alive and turned it into an animation capable of far greater reach than the original.

For me, the argument in this talk is ultimately a profoundly optimistic one. If it turns out to be valid, then there really can be such a thing in the world as moral progress. People are genuinely capable of arguing each other into new beliefs, new mindsets that ultimately will benefit humanity. If you think that’s unlikely, watch the talk. You might just find yourself reasoned to a different opinion.


An experiment I will never try again: hosting a session with my back to the audience. Photo: James Duncan Davidson

Friday, February 28, 2014

Omnivore - What Neuroscience Is Learning

From Bookforum's Omnivore blog, this is a cool collection of links related to mind and consciousness, including several pieces on free will.

One of the highlights of this collection is a review by John Jeffery and Todd K. Shackelford (Evolutionary Psychology, 2013, 11(5): 1077-1083) of Daniel C. Dennett’s Intuition Pumps and Other Tools for Thinking and Nicholas Humphrey’s Soul Dust: The Magic of Consciousness. Here is part of the review:
Humphrey, like Dennett, dispels the more romantic notions of consciousness as the gift of a spirit, but he cleverly reappropriates the terminology of spirituality to align with a scientific vision of consciousness. Both he and Dennett employ tactical vocabularies, wishing to make their arguments more appetizing to a general audience and, in Dennett’s case for compatibilism, safe for consumption. Most importantly, Soul Dust and Intuition Pumps work to collapse the irresistible assumption that we live in a phenomenological reality outside of our biology, while simultaneously celebrating and elevating its significance.
Enjoy the other links, as well.

What neuroscience is learning

Feb 27, 2014, 9:00 AM

Friday, October 25, 2013

The Wright Show - Robert Wright Talks with Joshua Greene (author of Moral Tribes)


On last week's episode of The Wright Show, Robert Wright spoke with Joshua Greene about his new book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (October 2013). Here is a synopsis of the book from Amazon:
A pathbreaking neuroscientist reveals how our social instincts turn Me into Us, but turn Us against Them—and what we can do about it

Our brains were designed for tribal life, for getting along with a select group of others (Us) and for fighting off everyone else (Them). But modern times have forced the world’s tribes into a shared space, resulting in epic clashes of values along with unprecedented opportunities. As the world shrinks, the moral lines that divide us become more salient and more puzzling. We fight over everything from tax codes to gay marriage to global warming, and we wonder where, if at all, we can find our common ground.

A grand synthesis of neuroscience, psychology, and philosophy, Moral Tribes reveals the underlying causes of modern conflict and lights the way forward. Greene compares the human brain to a dual-mode camera, with point-and-shoot automatic settings (“portrait,” “landscape”) as well as a manual mode. Our point-and-shoot settings are our emotions—efficient, automated programs honed by evolution, culture, and personal experience. The brain’s manual mode is its capacity for deliberate reasoning, which makes our thinking flexible. Point-and-shoot emotions make us social animals, turning Me into Us. But they also make us tribal animals, turning Us against Them. Our tribal emotions make us fight—sometimes with bombs, sometimes with words—often with life-and-death stakes.

An award-winning teacher and scientist, Greene directs Harvard University’s Moral Cognition Lab, which uses cutting-edge neuroscience and cognitive techniques to understand how people really make moral decisions. Combining insights from the lab with lessons from decades of social science and centuries of philosophy, the great question of Moral Tribes is this: How can we get along with Them when what they want feels so wrong to Us?

Ultimately, Greene offers a set of maxims for navigating the modern moral terrain, a practical road map for solving problems and living better lives. Moral Tribes shows us when to trust our instincts, when to reason, and how the right kind of reasoning can move us forward.

A major achievement from a rising star in a new scientific field, Moral Tribes will refashion your deepest beliefs about how moral thinking works and how it can work better.
Here's the talk.

The Wright Show

Oct 23, 2013 | Robert Wright & Joshua Greene


Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero) and Joshua Greene (Harvard University)




Recorded: Oct 22, 2013 | Posted: Oct 23, 2013

Links Mentioned

Joshua's new book, "Moral Tribes"
Bob's review of the book
Daniel Kahneman's book, "Thinking, Fast and Slow"
Bob's book, "The Moral Animal"

Thursday, August 08, 2013

Steven Pinker - Science Is Not Your Enemy (The New Republic)


In this article, published in The New Republic, evolutionary psychologist Steven Pinker offers a new definition of scientism and rants against those who question the supremacy of science as the only "real" tool for making sense of the world and ourselves.

Philip Kitcher, also writing for The New Republic, offered his own definition of scientism, or at least of the viewpoint that rejects scientism:
the insistence that some questions are beyond the scope of natural scientific inquiry, too large, too complex, too imprecise, and too important to be addressed by blundering over-simplifications.
Kitcher goes on:
The enthusiasm for natural scientific imperialism rests on five observations. First, there is the sense that the humanities and social sciences are doomed to deliver a seemingly directionless sequence of theories and explanations, with no promise of additive progress. Second, there is the contrasting record of extraordinary success in some areas of natural science. Third, there is the explicit articulation of technique and method in the natural sciences, which fosters the conviction that natural scientists are able to acquire and combine evidence in particularly rigorous ways. Fourth, there is the perception that humanists and social scientists are only able to reason cogently when they confine themselves to conclusions of limited generality: insofar as they aim at significant—general—conclusions, their methods and their evidence are unrigorous. Finally, there is the commonplace perception that the humanities and social sciences have been dominated, for long periods of their histories, by spectacularly false theories, grand doctrines that enjoy enormous popularity until fashion changes, as their glaring shortcomings are disclosed.
As a point of reference, here is a brief definition of scientism from Wikipedia:
Scientism is a term used, usually pejoratively, to refer to belief in the universal applicability of the scientific method and approach, and the view that empirical science constitutes the most authoritative worldview or most valuable part of human learning to the exclusion of other viewpoints. It has been defined as "the view that the characteristic inductive methods of the natural sciences are the only source of genuine factual knowledge and, in particular, that they alone can yield true knowledge about man and society." An individual who subscribes to scientism is referred to as a scientismist. The term scientism frequently implies a critique of the more extreme expressions of logical positivism and has been used by social scientists such as Friedrich Hayek, philosophers of science such as Karl Popper, and philosophers such as Hilary Putnam and Tzvetan Todorov to describe the dogmatic endorsement of scientific methodology and the reduction of all knowledge to only that which is measurable.
And here is Pinker's new definition (one that he is willing to defend), which bears no resemblance to the scientism most of us know and loathe:
Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable. Scientism does not mean that all current scientific hypotheses are true; most new ones are not, since the cycle of conjecture and refutation is the lifeblood of science. It is not an imperialistic drive to occupy the humanities; the promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them. And it is not the dogma that physical stuff is the only thing that exists. Scientists themselves are immersed in the ethereal medium of information, including the truths of mathematics, the logic of their theories, and the values that guide their enterprise. In this conception, science is of a piece with philosophy, reason, and Enlightenment humanism. It is distinguished by an explicit commitment to two ideals, and it is these that scientism seeks to export to the rest of intellectual life.
If this were what scientism looked like out in the world, I would be a huge supporter. However, there are many scientists who believe that science and the scientific method are the only viable worldview. More specifically, scientism is the belief that all things can be studied as objects, through objective measurements, and if they cannot be studied in this way, they either do not objectively exist or they are of no interest to science. Ken Wilber (this link will download an excellent "yellow paper" by Michael Fisher, Ph.D.) famously called this view Flatland, in reference to the classic science fiction novel (1884).

How is this working in the field of consciousness? I would argue not so well.

Science Is Not Your Enemy

An impassioned plea to neglected novelists, embattled professors, and tenure-less historians


BY STEVEN PINKER
The New Republic

The great thinkers of the Age of Reason and the Enlightenment were scientists. Not only did many of them contribute to mathematics, physics, and physiology, but all of them were avid theorists in the sciences of human nature. They were cognitive neuroscientists, who tried to explain thought and emotion in terms of physical mechanisms of the nervous system. They were evolutionary psychologists, who speculated on life in a state of nature and on animal instincts that are “infused into our bosoms.” And they were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that inflame us, and the foibles of shortsightedness that frustrate our best-laid plans.

These thinkers—Descartes, Spinoza, Hobbes, Locke, Hume, Rousseau, Leibniz, Kant, Smith—are all the more remarkable for having crafted their ideas in the absence of formal theory and empirical data. The mathematical theories of information, computation, and games had yet to be invented. The words “neuron,” “hormone,” and “gene” meant nothing to them. When reading these thinkers, I often long to travel back in time and offer them some bit of twenty-first-century freshman science that would fill a gap in their arguments or guide them around a stumbling block. What would these Fausts have given for such knowledge? What could they have done with it?

We don’t have to fantasize about this scenario, because we are living it. We have the works of the great thinkers and their heirs, and we have scientific knowledge they could not have dreamed of. This is an extraordinary time for the understanding of the human condition. Intellectual problems from antiquity are being illuminated by insights from the sciences of mind, brain, genes, and evolution. Powerful tools have been developed to explore them, from genetically engineered neurons that can be controlled with pinpoints of light to the mining of “big data” as a means of understanding how ideas propagate.

One would think that writers in the humanities would be delighted and energized by the efflorescence of new ideas from the sciences. But one would be wrong. Though everyone endorses science when it can cure disease, monitor the environment, or bash political opponents, the intrusion of science into the territories of the humanities has been deeply resented. Just as reviled is the application of scientific reasoning to religion; many writers without a trace of a belief in God maintain that there is something unseemly about scientists weighing in on the biggest questions. In the major journals of opinion, scientific carpetbaggers are regularly accused of determinism, reductionism, essentialism, positivism, and worst of all, something called “scientism.” The past couple years have seen four denunciations of scientism in this magazine alone, together with attacks in Bookforum, The Claremont Review of Books, The Huffington Post, The Nation, National Review Online, The New Atlantis, The New York Times, and Standpoint.

The eclectic politics of these publications reflects the bipartisan nature of the resentment. This passage, from the historian Jackson Lears’s 2011 review in The Nation of three books by Sam Harris, makes the standard case for the prosecution by the left:
Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the "fit" and the sterilization or elimination of the "unfit." ... Every schoolkid knows about what happened next: the catastrophic twentieth century. Two world wars, the systematic slaughter of innocents on an unprecedented scale, the proliferation of unimaginable destructive weapons, brush fire wars on the periphery of empire—all these events involved, in various degrees, the application of scientific research to advanced technology.
The case from the right, captured in this 2007 speech from Leon Kass, George W. Bush’s bioethics adviser, is just as measured:
Scientific ideas and discoveries about living nature and man, perfectly welcome and harmless in themselves, are being enlisted to do battle against our traditional religious and moral teachings, and even our self-understanding as creatures with freedom and dignity. A quasi-religious faith has sprung up among us—let me call it "soul-less scientism"—which believes that our new biology, eliminating all mystery, can give a complete account of human life, giving purely scientific explanations of human thought, love, creativity, moral judgment, and even why we believe in God. ... Make no mistake. The stakes in this contest are high: at issue are the moral and spiritual health of our nation, the continued vitality of science, and our own self-understanding as human beings and as children of the West.
These are zealous prosecutors indeed. But their cases are weak. The mindset of science cannot be blamed for genocide and war and does not threaten the moral and spiritual health of our nation. It is, rather, indispensable in all areas of human concern, including politics, the arts, and the search for meaning, purpose, and morality.

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable. Scientism does not mean that all current scientific hypotheses are true; most new ones are not, since the cycle of conjecture and refutation is the lifeblood of science. It is not an imperialistic drive to occupy the humanities; the promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them. And it is not the dogma that physical stuff is the only thing that exists. Scientists themselves are immersed in the ethereal medium of information, including the truths of mathematics, the logic of their theories, and the values that guide their enterprise. In this conception, science is of a piece with philosophy, reason, and Enlightenment humanism. It is distinguished by an explicit commitment to two ideals, and it is these that scientism seeks to export to the rest of intellectual life.


The first is that the world is intelligible. The phenomena we experience may be explained by principles that are more general than the phenomena themselves. These principles may in turn be explained by more fundamental principles, and so on. In making sense of our world, there should be few occasions in which we are forced to concede “It just is” or “It’s magic” or “Because I said so.” The commitment to intelligibility is not a matter of brute faith, but gradually validates itself as more and more of the world becomes explicable in scientific terms. The processes of life, for example, used to be attributed to a mysterious élan vital; now we know they are powered by chemical and physical reactions among complex molecules.

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

The second ideal is that the acquisition of knowledge is hard. The world does not go out of its way to reveal its workings, and even if it did, our minds are prone to illusions, fallacies, and superstitions. Most of the traditional causes of belief—faith, revelation, dogma, authority, charisma, conventional wisdom, the invigorating glow of subjective certainty—are generators of error and should be dismissed as sources of knowledge. To understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity. Any movement that calls itself “scientific” but fails to nurture opportunities for the falsification of its own beliefs (most obviously when it murders or imprisons the people who disagree with it) is not a scientific movement.

In which ways, then, does science illuminate human affairs? Let me start with the most ambitious: the deepest questions about who we are, where we came from, and how we define the meaning and purpose of our lives. This is the traditional territory of religion, and its defenders tend to be the most excitable critics of scientism. They are apt to endorse the partition plan proposed by Stephen Jay Gould in his worst book, Rocks of Ages, according to which the proper concerns of science and religion belong to “non-overlapping magisteria.” Science gets the empirical universe; religion gets the questions of moral meaning and value.

Unfortunately, this entente unravels as soon as you begin to examine it. The moral worldview of any scientifically literate person—one who is not blinkered by fundamentalism—requires a radical break from religious conceptions of meaning and value.

To begin with, the findings of science entail that the belief systems of all the world’s traditional religions and cultures—their theories of the origins of life, humans, and societies—are factually mistaken. We know, but our ancestors did not, that humans belong to a single species of African primate that developed agriculture, government, and writing late in its history. We know that our species is a tiny twig of a genealogical tree that embraces all living things and that emerged from prebiotic chemicals almost four billion years ago. We know that we live on a planet that revolves around one of a hundred billion stars in our galaxy, which is one of a hundred billion galaxies in a 13.8-billion-year-old universe, possibly one of a vast number of universes. We know that our intuitions about space, time, matter, and causation are incommensurable with the nature of reality on scales that are very large and very small. We know that the laws governing the physical world (including accidents, disease, and other misfortunes) have no goals that pertain to human well-being. There is no such thing as fate, providence, karma, spells, curses, augury, divine retribution, or answered prayers—though the discrepancy between the laws of probability and the workings of cognition may explain why people believe there are. And we know that we did not always know these things, that the beloved convictions of every time and culture may be decisively falsified, doubtless including some we hold today.

In other words, the worldview that guides the moral and spiritual values of an educated person today is the worldview given to us by science. Though the scientific facts do not by themselves dictate values, they certainly hem in the possibilities. By stripping ecclesiastical authority of its credibility on factual matters, they cast doubt on its claims to certitude in matters of morality. The scientific refutation of the theory of vengeful gods and occult forces undermines practices such as human sacrifice, witch hunts, faith healing, trial by ordeal, and the persecution of heretics. The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet. For the same reason, they undercut any moral or political system based on mystical forces, quests, destinies, dialectics, struggles, or messianic ages. And in combination with a few unexceptionable convictions— that all of us value our own welfare and that we are social beings who impinge on each other and can negotiate codes of conduct—the scientific facts militate toward a defensible morality, namely adhering to principles that maximize the flourishing of humans and other sentient beings. This humanism, which is inextricable from a scientific understanding of the world, is becoming the de facto morality of modern democracies, international organizations, and liberalizing religions, and its unfulfilled promises define the moral imperatives we face today.

Moreover, science has contributed—directly and enormously—to the fulfillment of these values. If one were to list the proudest accomplishments of our species (setting aside the removal of obstacles we set in our own path, such as the abolition of slavery and the defeat of fascism), many would be gifts bestowed by science.

The most obvious is the exhilarating achievement of scientific knowledge itself. We can say much about the history of the universe, the forces that make it tick, the stuff we’re made of, the origin of living things, and the machinery of life, including our own mental life. Better still, this understanding consists not in a mere listing of facts, but in deep and elegant principles, like the insight that life depends on a molecule that carries information, directs metabolism, and replicates itself.

Science has also provided the world with images of sublime beauty: stroboscopically frozen motion, exotic organisms, distant galaxies and outer planets, fluorescing neural circuitry, and a luminous planet Earth rising above the moon’s horizon into the blackness of space. Like great works of art, these are not just pretty pictures but prods to contemplation, which deepen our understanding of what it means to be human and of our place in nature.

And contrary to the widespread canard that technology has created a dystopia of deprivation and violence, every global measure of human flourishing is on the rise. The numbers show that after millennia of near-universal poverty, a steadily growing proportion of humanity is surviving the first year of life, going to school, voting in democracies, living in peace, communicating on cell phones, enjoying small luxuries, and surviving to old age. The Green Revolution in agronomy alone saved a billion people from starvation. And if you want examples of true moral greatness, go to Wikipedia and look up the entries for “smallpox” and “rinderpest” (cattle plague). The definitions are in the past tense, indicating that human ingenuity has eradicated two of the cruelest causes of suffering in the history of our kind.

Though science is beneficially embedded in our material, moral, and intellectual lives, many of our cultural institutions, including the liberal arts programs of many universities, cultivate a philistine indifference to science that shades into contempt. Students can graduate from elite colleges with a trifling exposure to science. They are commonly misinformed that scientists no longer care about truth but merely chase the fashions of shifting paradigms. A demonization campaign anachronistically impugns science for crimes that are as old as civilization, including racism, slavery, conquest, and genocide.

Just as common, and as historically illiterate, is the blaming of science for political movements with a pseudoscientific patina, particularly Social Darwinism and eugenics. Social Darwinism was the misnamed laissez-faire philosophy of Herbert Spencer. It was inspired not by Darwin’s theory of natural selection, but by Spencer’s Victorian-era conception of a mysterious natural force for progress, which was best left unimpeded. Today the term is often used to smear any application of evolution to the understanding of human beings. Eugenics was the campaign, popular among leftists and progressives in the early decades of the twentieth century, for the ultimate form of social progress, improving the genetic stock of humanity. Today the term is commonly used to assail behavioral genetics, the study of the genetic contributions to individual differences.

I can testify that this recrimination is not a relic of the 1990s science wars. When Harvard reformed its general education requirement in 2006–2007, the preliminary task force report introduced the teaching of science without any mention of its place in human knowledge: “Science and technology directly affect our students in many ways, both positive and negative: they have led to life-saving medicines, the internet, more efficient energy storage, and digital entertainment; they also have shepherded nuclear weapons, biological warfare agents, electronic eavesdropping, and damage to the environment.” This strange equivocation between the utilitarian and the nefarious was not applied to other disciplines. (Just imagine motivating the study of classical music by noting that it both generates economic activity and inspired the Nazis.) And there was no acknowledgment that we might have good reasons to prefer science and know-how over ignorance and superstition.

At a 2011 conference, another colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

A major goad for the recent denunciations of scientism has been the application of neuroscience, evolution, and genetics to human affairs. Certainly many of these applications are glib or wrong, and they are fair game for criticism: scanning the brains of voters as they look at politicians’ faces, attributing war to a gene for aggression, explaining religion as an evolutionary adaptation to bond the group. Yet it’s not unheard of for intellectuals who are innocent of science to advance ideas that are glib or wrong, and no one is calling for humanities scholars to go back to their carrels and stay out of discussions of things that matter. It is a mistake to use a few wrongheaded examples as an excuse to quarantine the sciences of human nature from our attempt to understand the human condition.

Take our understanding of politics. “What is government itself,” asked James Madison, “but the greatest of all reflections on human nature?” The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation. These impulses ordinarily operate beneath our conscious awareness, but in some circumstances they can be turned around by reason and debate. We are starting to grasp why these moralistic impulses evolved; how they are implemented in the brain; how they differ among individuals, cultures, and subcultures; and which conditions turn them on and off.

The application of science to politics not only enriches our stock of ideas, but also offers the means to ascertain which of them are likely to be correct. Political debates have traditionally been deliberated through case studies, rhetoric, and what software engineers call HiPPO (highest-paid person’s opinion). Not surprisingly, the controversies have careened without resolution. Do democracies fight each other? What about trading partners? Do neighboring ethnic groups inevitably play out ancient hatreds in bloody conflict? Do peacekeeping forces really keep the peace? Do terrorist organizations get what they want? How about Gandhian nonviolent movements? Are post-conflict reconciliation rituals effective at preventing the renewal of conflict?

History nerds can adduce examples that support either answer, but that does not mean the questions are irresolvable. Political events are buffeted by many forces, so it’s possible that a given force is potent in general but submerged in a particular instance. With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively. As best we can tell at present, the answers to the questions listed above are (on average, and all things being equal) no, no, no, yes, no, yes, and yes.

The humanities are the domain in which the intrusion of science has produced the strongest recoil. Yet it is just that domain that would seem to be most in need of an infusion of new ideas. By most accounts, the humanities are in trouble. University programs are downsizing, the next generation of scholars is un- or underemployed, morale is sinking, students are staying away in droves. No thinking person should be indifferent to our society’s disinvestment from the humanities, which are indispensable to a civilized democracy.

Diagnoses of the malaise of the humanities rightly point to anti-intellectual trends in our culture and to the commercialization of our universities. But an honest appraisal would have to acknowledge that some of the damage is self-inflicted. The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, dogmatic relativism, and suffocating political correctness. And they have failed to define a progressive agenda. Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done.

Those ways do deserve respect, and there can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

In some disciplines, this consilience is a fait accompli. Archeology has grown from a branch of art history to a high-tech science. Linguistics and the philosophy of mind shade into cognitive science and neuroscience.

Similar opportunities are there for the exploring. The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

And as with politics, the advent of data science applied to books, periodicals, correspondence, and musical scores holds the promise for an expansive new “digital humanities.” The possibilities for theory and discovery are limited only by the imagination and include the origin and spread of ideas, networks of intellectual and artistic influence, the persistence of historical memory, the waxing and waning of themes in literature, and patterns of unofficial censorship and taboo.

Nonetheless, many humanities scholars have reacted to these opportunities like the protagonist of the grammar-book example of the volitional future tense: “I will drown; no one shall save me.” Noting that these analyses flatten the richness of individual works, they reach for the usual adjectives: simplistic, reductionist, naïve, vulgar, and of course, scientistic.

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic. An appreciation of the particulars of a work can co-exist with explanations at many other levels, from the personality of an author to the cultural milieu, the faculties of human nature, and the laws governing social beings. The rejection of a search for general trends and principles calls to mind Jorge Luis Borges’s fictitious empire in which “the Cartographers Guild drew a map of the Empire whose size was that of the Empire, coinciding point for point with it. The following Generations ... saw the vast Map to be Useless and permitted it to decay and fray under the Sun and winters.”

And the critics should be careful with the adjectives. If anything is naïve and simplistic, it is the conviction that the legacy silos of academia should be fortified and that we should be forever content with current ways of making sense of the world. Surely our conceptions of politics, culture, and morality have much to learn from our best understanding of the physical universe and of our makeup as a species.

Steven Pinker is a contributing editor at The New Republic, the Johnstone Family Professor of Psychology at Harvard University, and the author, most recently, of The Better Angels of Our Nature: Why Violence Has Declined.

Friday, March 15, 2013

Paul Horwich - Was Wittgenstein Right?


Following up on the previous post, a film biography of Wittgenstein by Derek Jarman, this article by Paul Horwich at the New York Times philosophy column, The Stone, looks at Wittgenstein's conception of the essential problems of philosophy and his claim that
there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking. 
This attitude is in stark opposition to the traditional view, which continues to prevail. Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives. It’s taken for granted that there is deep understanding to be obtained of the nature of consciousness, of how knowledge of the external world is possible, of whether our decisions can be truly free, of the structure of any just society, and so on — and that philosophy’s job is to provide such understanding.
These are not popular views, and Wittgenstein has definitely fallen out of favor, despite having been named in one poll as the most important philosopher of the 20th century.

NOTE: A response to this post by Michael P. Lynch, Of Flies and Philosophers: Wittgenstein and Philosophy, was published in The Stone later that week.

Was Wittgenstein Right?

By PAUL HORWICH
March 3, 2013


The singular achievement of the controversial early 20th century philosopher Ludwig Wittgenstein was to have discerned the true nature of Western philosophy — what is special about its problems, where they come from, how they should and should not be addressed, and what can and cannot be accomplished by grappling with them. The uniquely insightful answers provided to these meta-questions are what give his treatments of specific issues within the subject — concerning language, experience, knowledge, mathematics, art and religion among them — a power of illumination that cannot be found in the work of others.

Admittedly, few would agree with this rosy assessment — certainly not many professional philosophers. Apart from a small and ignored clique of hard-core supporters, the usual view these days is that his writing is self-indulgently obscure and that behind the catchy slogans there is little of intellectual value. But this dismissal disguises what is pretty clearly the real cause of Wittgenstein’s unpopularity within departments of philosophy: namely, his thoroughgoing rejection of the subject as traditionally and currently practiced; his insistence that it can’t give us the kind of knowledge generally regarded as its raison d’être.

Wittgenstein claims that there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking.

Free Press, Ludwig Wittgenstein

This attitude is in stark opposition to the traditional view, which continues to prevail. Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives. It’s taken for granted that there is deep understanding to be obtained of the nature of consciousness, of how knowledge of the external world is possible, of whether our decisions can be truly free, of the structure of any just society, and so on — and that philosophy’s job is to provide such understanding. Isn’t that why we are so fascinated by it?

If so, then we are duped and bound to be disappointed, says Wittgenstein. For these are mere pseudo-problems, the misbegotten products of linguistic illusion and muddled thinking. So it should be entirely unsurprising that the “philosophy” aiming to solve them has been marked by perennial controversy and lack of decisive progress — by an embarrassing failure, after over 2000 years, to settle any of its central issues. Therefore traditional philosophical theorizing must give way to a painstaking identification of its tempting but misguided presuppositions and an understanding of how we ever came to regard them as legitimate. But in that case, he asks, “[w]here does [our] investigation get its importance from, since it seems only to destroy everything interesting, that is, all that is great and important? (As it were all the buildings, leaving behind only bits of stone and rubble)” — and answers that “[w]hat we are destroying is nothing but houses of cards and we are clearing up the ground of language on which they stand.”

[Image: Bertrand Russell, one of Wittgenstein’s early teachers, at his home in London in 1962. Credit: Associated Press]

Given this extreme pessimism about the potential of philosophy — perhaps tantamount to a denial that there is such a subject — it is hardly surprising that “Wittgenstein” is uttered with a curl of the lip in most philosophical circles. For who likes to be told that his or her life’s work is confused and pointless? Thus, even Bertrand Russell, his early teacher and enthusiastic supporter, was eventually led to complain peevishly that Wittgenstein seems to have “grown tired of serious thinking and invented a doctrine which would make such an activity unnecessary.”

But what is that notorious doctrine, and can it be defended? We might boil it down to four related claims.

The first is that traditional philosophy is scientistic: its primary goals, which are to arrive at simple, general principles, to uncover profound explanations, and to correct naïve opinions, are taken from the sciences. And this is undoubtedly the case.

The second is that the non-empirical (“armchair”) character of philosophical investigation — its focus on conceptual truth — is in tension with those goals. That’s because our concepts exhibit a highly theory-resistant complexity and variability. They evolved, not for the sake of science and its objectives, but rather in order to cater to the interacting contingencies of our nature, our culture, our environment, our communicative needs and our other purposes. As a consequence the commitments defining individual concepts are rarely simple or determinate, and differ dramatically from one concept to another. Moreover, it is not possible (as it is within empirical domains) to accommodate superficial complexity by means of simple principles at a more basic (e.g. microscopic) level.

The third main claim of Wittgenstein’s metaphilosophy — an immediate consequence of the first two — is that traditional philosophy is necessarily pervaded with oversimplification; analogies are unreasonably inflated; exceptions to simple regularities are wrongly dismissed.

Therefore — the fourth claim — a decent approach to the subject must avoid theory-construction and instead be merely “therapeutic,” confined to exposing the irrational assumptions on which theory-oriented investigations are based and the irrational conclusions to which they lead.

Consider, for instance, the paradigmatically philosophical question: “What is truth?” This provokes perplexity because, on the one hand, it demands an answer of the form, “Truth is such-and-such,” but on the other hand, despite hundreds of years of looking, no acceptable answer of that kind has ever been found. We’ve tried truth as “correspondence with the facts,” as “provability,” as “practical utility,” and as “stable consensus”; but all turned out to be defective in one way or another — either circular or subject to counterexamples. Reactions to this impasse have included a variety of theoretical proposals. Some philosophers have been led to deny that there is such a thing as absolute truth. Some have maintained (insisting on one of the above definitions) that although truth exists, it lacks certain features that are ordinarily attributed to it — for example, that the truth may sometimes be impossible to discover. Some have inferred that truth is intrinsically paradoxical and essentially incomprehensible. And others persist in the attempt to devise a definition that will fit all the intuitive data.

But from Wittgenstein’s perspective each of the first three of these strategies rides roughshod over our fundamental convictions about truth, and the fourth is highly unlikely to succeed. Instead we should begin, he thinks, by recognizing (as mentioned above) that our various concepts play very different roles in our cognitive economy and (correspondingly) are governed by defining principles of very different kinds. Therefore, it was always a mistake to extrapolate from the fact that empirical concepts, such as red or magnetic or alive, stand for properties with specifiable underlying natures to the presumption that the notion of truth must stand for some such property as well.

Wittgenstein’s conceptual pluralism positions us to recognize that notion’s idiosyncratic function, and to infer that truth itself will not be reducible to anything more basic. More specifically, we can see that the concept’s function in our cognitive economy is merely to serve as a device of generalization. It enables us to say such things as “Einstein’s last words were true,” and not be stuck with “If Einstein’s last words were that E=mc², then E=mc²; and if his last words were that nuclear weapons should be banned, then nuclear weapons should be banned; … and so on,” which has the disadvantage of being infinitely long! Similarly we can use it to say: “We should want our beliefs to be true” (instead of struggling with “We should want that if we believe that E=mc², then E=mc²; and that if we believe … etc.”). We can see, also, that this sort of utility depends upon nothing more than the fact that the attribution of truth to a statement is obviously equivalent to the statement itself — for example, “It’s true that E=mc²” is equivalent to “E=mc²”. Thus possession of the concept of truth appears to consist in an appreciation of that triviality, rather than a mastery of any explicit definition. The traditional search for such an account (or for some other form of reductive analysis) was a wild-goose chase, a pseudo-problem. Truth emerges as exceptionally unprofound and as exceptionally unmysterious.
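The “device of generalization” point can be put in symbols. What follows is a minimal sketch using the standard deflationist equivalence schema; the angle-bracket notation ⟨p⟩ for “the proposition that p” is a textbook convention assumed here for illustration, not Horwich’s own wording in this piece:

% The equivalence schema: attributing truth to a statement
% adds nothing beyond the statement itself.
\[ \mathrm{True}(\langle p \rangle) \leftrightarrow p \]

% "Einstein's last words were true" then abbreviates one
% quantified claim in place of an infinitely long conjunction:
\[ \forall p \, \bigl( \text{Einstein's last words} = \langle p \rangle \rightarrow p \bigr) \]

Without a truth predicate, the second claim could only be spelled out as the endless list “if his last words were that E=mc², then E=mc²; and if …,” which is exactly the expressive work the paragraph above attributes to the concept.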

This example illustrates the key components of Wittgenstein’s metaphilosophy, and suggests how to flesh them out a little further. Philosophical problems typically arise from the clash between the inevitably idiosyncratic features of special-purpose concepts — true, good, object, person, now, necessary — and the scientistically driven insistence upon uniformity. Moreover, the various kinds of theoretical move designed to resolve such conflicts (forms of skepticism, revisionism, mysterianism and conservative systematization) are not only irrational, but unmotivated. The paradoxes to which they respond should instead be resolved merely by coming to appreciate the mistakes of perverse overgeneralization from which they arose. And the fundamental source of this irrationality is scientism.

As Wittgenstein put it in the “The Blue Book”:
Our craving for generality has [as one] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is “purely descriptive.”
These radical ideas are not obviously correct, and may on close scrutiny turn out to be wrong. But they deserve to receive that scrutiny — to be taken much more seriously than they are. Yes, most of us have been interested in philosophy only because of its promise to deliver precisely the sort of theoretical insights that Wittgenstein argues are illusory. But such hopes are no defense against his critique. Besides, if he turns out to be right, satisfaction enough may surely be found in what we still can get — clarity, demystification and truth.




Paul Horwich is a professor of philosophy at New York University. He is the author of several books, including “Reflections on Meaning,” “Truth-Meaning-Reality,” and most recently, “Wittgenstein’s Metaphilosophy.”
The Stone features the writing of contemporary philosophers on issues both timely and timeless. The series moderator is Simon Critchley, who teaches philosophy at The New School for Social Research in New York.

Thursday, August 16, 2012

Arthur Dobrin - 13 Ways of Looking at Compassion


From his Psychology Today blog, “Am I Right? How to live ethically,” Arthur Dobrin riffs on some ways we can look at compassion, coming up with 13 (my favorite number).

Humanity is realized in compassion and justice.


1. You are part of society.
Therefore, happiness is created through compassionate living.

2. Compassion is part of human nature.
Cultivate the compassionate part of yourself and connect to others.

3. Reason is part of human nature.
Cultivate the reasonable part of yourself so you make sound decisions.

4. Compassion is part of human nature. Reason is part of human nature.
Therefore, make no separation between heart and mind. Emotion and reason, feeling and thought — all that is human.

5. Compassion and thoughtfulness lead to good deeds.

6. You realize your humanity in compassion and justice.

7. A corrupt heart rattles with greed.

8. You may reside in another's heart even at a distance.
The greater reward is hearts in close contact.

9. If your heart is acquisitive, you will lead a life of disappointment.
If your heart is generous, your life will be satisfied.

10. Listen with an open mind. 
Listen with an open heart. Let the other in. This is the way of respect.

11. A closed fist can't offer a helping hand. 
A closed heart can't find fellow-feeling.

12. You cannot know peace if your heart is full of bitterness.

13. A fearful heart cannot feel compassion.