Showing posts with label intuition. Show all posts

Sunday, February 23, 2014

Omnivore - The Path of Philosophy

From Bookforum's Omnivore blog, here is another fine collection of links inspired and related to philosophy.

The path of philosophy
Feb 21 2014, 9:00 AM

Monday, February 17, 2014

Rediscovering the Unconscious with Daniel Kahneman

Princeton psychologist and 2002 Nobel Prize winner Daniel Kahneman was recently in conversation with Robert Walmsley University Professor Cass Sunstein (Harvard Law School) at Harvard University. I haven't found any video of it, but here is a summary from Harvard Magazine. Kahneman is the author most recently of Thinking, Fast and Slow (2011); Sunstein is co-author of Nudge: Improving Decisions About Health, Wealth, and Happiness (2008) and more recently the author of Simpler: The Future of Government (2013).

Rediscovering the Unconscious


From left: Daniel Kahneman and Cass Sunstein

Posted 2.3.14

Intuition, happiness, and memory were the topics on the table Monday night as Princeton psychologist Daniel Kahneman, LL.D. ’04, discussed his work on human judgment and cognitive bias, which won him a share of the 2002 Nobel Prize in economics. In conversation with Robert Walmsley University Professor Cass Sunstein, Kahneman considered implications of and modifications to his theory of heuristics—the mental shortcuts that guide judgment—popularized in his 2011 bestseller, Thinking, Fast and Slow.

The work began, said Kahneman, as “a series of accidents.” At Hebrew University of Jerusalem, fellow researcher Amos Tversky told Kahneman about some research on decisionmaking that suggested human behavior was reasonably rational, displaying a good intuitive grasp of statistics—a conclusion that seemed to contradict Kahneman’s own experiences. The ensuing discussion led to a lengthy collaboration, as the two delved into the realities of how people assign probabilities to events and think when facing uncertainty. That research would prove formative for the modern field of behavioral economics. (Tversky, who died in 1996, was ineligible to share the Nobel Prize.)

“When you try to answer a question,” said Kahneman, “you sometimes answer a different question.” In a seminal 1979 paper, he and Tversky introduced prospect theory, which challenged the classical economic assumption of “homo economicus,” a rational actor motivated by self-interest, by describing how people actually evaluate potential gains and losses under risk. In related work, they catalogued the mental shortcuts, or heuristics, that guide people’s everyday decisions, as well as the systematic biases that can result from them. “A heuristic,” Kahneman explained, “is just answering a difficult question by answering an easy one.” When asked, for instance, the number of divorces at one’s university, one might substitute the question of how easy it is to think of examples of divorces, a heuristic Kahneman and Tversky dubbed “availability.” “Evaluation happens in a fraction of a second,” Kahneman said. Reflecting on this theory’s place in the history of psychology, he noted, “In the last 20 years, [psychologists] have rediscovered the unconscious…but it didn’t come from Freud. It came from experimental psychology.”

Kahneman added several new heuristics to the original three—availability, representativeness, and anchoring—that he and Tversky had initially defined. The “affect” heuristic, a measure of emotion, is central to how people make decisions regarding issues like genetically modified organisms or endangered wildlife: emotions circumvent rational cost-benefit analysis. He also called into question the “anchoring” heuristic, which describes how people’s answers can be influenced by irrelevant numbers; for instance, he and Tversky had observed that subjects who were asked whether the tallest redwood was taller or shorter than 1,200 feet gave different answers when the reference point was changed to 180 feet. While acknowledging that the exact classification of the anchoring phenomenon was a matter of “inside baseball,” Kahneman said his own thinking had changed. He argued that anchoring was a more deliberate, strategic mechanism than the genuinely intuitive heuristics; it “doesn’t fit the description of substituting something for something else.”

Similar heuristics affect people’s perceptions of happiness, Kahneman continued. Rather than remembering the entirety of an experience, he said, people rate their overall happiness based on their moment of peak happiness and their emotional state at the experience’s end. Early on, he sought to explore this conundrum; he used “experience sampling” techniques developed in other fields to ask subjects to rate their happiness in real time—using, for instance, phone surveys asking, “How happy are you right now?”—rather than relying on retrospective evaluations. “I thought this was really the most important,” he recalled. “Whether people are happy in their life is more important than how happy they are when they think about their life.” But his opinion has changed; he now thinks that retrospective impressions also influence decisionmaking. “It turns out that when people make plans for the future, they’re not trying to maximize the quality of their [actual] experience,” he said. “In a way, when you think about the future, you’re maximizing the quality of your anticipated memories.”

In response to audience questions—several hundred people filled Harvard Business School’s Burden Auditorium to hear the psychologist speak—Kahneman affirmed his support for applying behavioral economics to public policy, an approach championed by Sunstein and University of Chicago economist Richard Thaler in their book Nudge: Improving Decisions about Health, Wealth, and Happiness. “We don’t know much,” Kahneman acknowledged, “but we know a few things, and we know a few things that can be good to use.”

In his own life, though, the scholar admitted that he is as prone to cognitive biases and errors of judgment as anyone else. “When we were studying biases and errors of judgment, we were studying our own,” he said. With humorous regret, he added, “I’ve been doing that for 25 years now, and I think I haven’t improved at all.”

Thursday, May 23, 2013

Daniel Dennett: Intuition Pumps and Other Tools for Thinking


Philosopher Daniel Dennett has a new book, Intuition Pumps and Other Tools for Thinking. Daniel C. Dennett is the Austin B. Fletcher Professor of Philosophy at Tufts University and the author of numerous books, including Breaking the Spell, Darwin's Dangerous Idea, and Consciousness Explained. And for the record, naming a book Consciousness Explained is more than a little arrogant, and ignorant. He explains nothing, and I am not sure he even understands what consciousness is . . . but I am biased against his materialist dogmatism.

Daniel Dennett: Intuition Pumps and Other Tools for Thinking

Published on May 22, 2013

Professor Dennett comes to Google to talk about his new book, Intuition Pumps and Other Tools for Thinking. Dennett deploys his thinking tools to gain traction on these thorny issues while offering readers insight into how and why each tool was built. Alongside well-known favorites like Occam's Razor and reductio ad absurdum lie thrilling descriptions of Dennett's own creations: Trapped in the Robot Control Room, Beware of the Prime Mammal, and The Wandering Two-Bitser. Ranging across disciplines as diverse as psychology, biology, computer science, and physics, Dennett's tools embrace in equal measure light-heartedness and accessibility as they welcome uninitiated and seasoned readers alike. As always, his goal remains to teach you how to "think reliably and even gracefully about really hard questions." 



Here is some bonus material for your reading pleasure, via Open Culture.

Philosopher Daniel Dennett Presents Seven Tools For Critical Thinking

May 21st, 2013


Love him or hate him, many of our readers may know enough about Daniel C. Dennett to have formed some opinion of his work. While Dennett can be a soft-spoken, jovial presence, he doesn’t suffer fuzzy thinking or banal platitudes—what he calls “deepities”—lightly. Whether he’s explaining (or explaining away) consciousness, religion, or free will, Dennett’s materialist philosophy leaves little to no room for mystical speculation or sentimentalism. So it should come as no surprise that his latest book, Intuition Pumps And Other Tools for Thinking, is a hard-headed how-to for cutting through common cognitive biases and logical fallacies.

In a recent Guardian article, Dennett excerpts seven tools for thinking from the new book. Having taught critical thinking and argumentation to undergraduates for years, I can say that his advice is pretty much standard fare of critical reasoning. But Dennett’s formulations are uniquely—and bluntly—his own. Below is a brief summary of his seven tools.

1. Use Your Mistakes

Dennett’s first tool recommends rigorous intellectual honesty, self-scrutiny, and trial and error. In typical fashion, he puts it this way: “when you make a mistake, you should learn to take a deep breath, grit your teeth and then examine your own recollections of the mistake as ruthlessly and as dispassionately as you can manage.” This tool is a close relative of the scientific method, in which every error offers an opportunity to learn, rather than a chance to mope and grumble.

2. Respect Your Opponent

Often known as reading in “good faith” or “being charitable,” this second point is as much a rhetorical as a logical tool, since the essence of persuasion involves getting people to actually listen to you. And they won’t if you’re overly nitpicky, pedantic, mean-spirited, hasty, or unfair. As Dennett puts it, “your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment.”

3. The “Surely” Klaxon

A “Klaxon” is a loud electric horn—such as a car horn—used to sound an urgent warning. In this point, Dennett asks us to treat the word “surely” as a rhetorical warning sign that an author of an argumentative essay has stated an “ill-examined ‘truism’” without offering sufficient reason or evidence, hoping the reader will quickly agree and move on. While this is not always the case, writes Dennett, such verbiage often signals a weak point in an argument, since these words would not be necessary if the author, and reader, really could be “sure.”

4. Answer Rhetorical Questions

Like the use of “surely,” a rhetorical question can be a substitute for thinking. While rhetorical questions depend on the sense that “the answer is so obvious that you’d be embarrassed to answer it,” Dennett recommends doing so anyway. He illustrates the point with a Peanuts cartoon: “Charlie Brown had just asked, rhetorically: ‘Who’s to say what is right and wrong here?’ and Lucy responded, in the next panel: ‘I will.’” Lucy’s answer “surely” caught Charlie Brown off-guard. And if he were engaged in genuine philosophical debate, it would force him to re-examine his assumptions.

5. Employ Occam’s Razor

The 14th-century English philosopher William of Occam lent his name to this principle, which previously went by the name of lex parsimoniae, or the law of parsimony. Dennett summarizes it this way: “The idea is straightforward: don’t concoct a complicated, extravagant theory if you’ve got a simpler one (containing fewer ingredients, fewer entities) that handles the phenomenon just as well.”

6. Don’t Waste Your Time on Rubbish

Displaying characteristic gruffness in his summary, Dennett’s sixth point expounds “Sturgeon’s law,” which states that roughly “90% of everything is crap.” While he concedes this may be an exaggeration, the point is that there’s no point in wasting your time on arguments that simply aren’t any good, even, or especially, for the sake of ideological axe-grinding.

7. Beware of Deepities

Dennett saves for last one of his favorite boogeymen, the “deepity,” a term he credits to Miriam Weizenbaum, daughter of the computer scientist Joseph Weizenbaum. A deepity is “a proposition that seems both important and true—and profound—but that achieves this effect by being ambiguous.” Here is where Dennett’s devotion to clarity at all costs tends to split his readers into two camps. Some think his drive for precision is an admirable analytic ethic; some think he manifests an unfair bias against the language of metaphysicians, mystics, theologians, continental and post-modern philosophers, and maybe even poets. Who am I to decide? (Don’t answer that).

You’ll have to make up your own mind about whether Dennett’s last rule applies in all cases, but his first six can’t be beat when it comes to critically vetting the myriad claims routinely vying for our attention and agreement.

via Mefi

Josh Jones is a writer and musician based in Washington, DC. Follow him @jdmagness

Friday, May 17, 2013

Explainer: What Is Intuition?

From a site called Science Alert, this is a cool overview article on what we know about intuition and how to use it (as well as when).

Explainer: What Is Intuition?


BEN NEWELL, UNSW
FRIDAY, 03 MAY 2013


Whether or not intuition is inherently “good” depends on the situation. 
Image: tlfurrer/Shutterstock

The word intuition is derived from the Latin intueor, “to see”; intuition is thus often invoked to explain how the mind can “see” answers to problems or decisions in the absence of explicit reasoning – a “gut reaction”.

Several recent popular psychology books – such as Malcolm Gladwell’s Blink, Daniel Kahneman’s Thinking, Fast and Slow and Jonah Lehrer’s The Decisive Moment – have emphasised this “power of intuition” and our ability to “think without thinking”, sometimes suggesting we should rely more heavily on intuition than on deliberative (slow) or “rational” thought processes.

Such books also argue that most of the time we act intuitively – that is, without knowing why we do things we do.

But what is the evidence for these claims? And what is intuition anyway? 

Defining intuition


Albert Einstein once noted “intuition is nothing but the outcome of earlier intellectual experience”. In a similar vein, the American psychologist Herbert A. Simon (a fellow Nobel Laureate) stated that intuition was “nothing more and nothing less than recognition”.

These definitions are very useful because they remind us that intuition need not refer to some magical process by which answers pop into our minds from thin air or from deep within the unconscious.

On the contrary: intuitive decisions are often a product of previous intense and/or extensive explicit thinking.

Such decisions may appear subjectively fast and effortless because they are made on the basis of recognition.

As a simple example, consider the decision to take an umbrella when you leave for work in the morning. A quick glance at the sky can provide a cue (such as portentous clouds); the cue gives us access to information stored in memory (rain is likely); and this information provides an answer (take an umbrella).

When such cues are not so readily apparent, or information in memory is either absent or more difficult to access, our decisions shift to become more deliberative.
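The cue → memory → answer chain described above can be caricatured in a few lines of code. This is only a toy sketch (the lookup tables and names are invented for illustration, not from the article): when a cue is recognized, the answer is retrieved rather than computed, and when it isn’t, the decision falls back to deliberation:

```python
# Toy model of recognition-based ("intuitive") decision making.
# The cue and action tables stand in for information stored in memory.

CUE_TO_BELIEF = {
    "dark clouds": "rain likely",
    "clear sky": "rain unlikely",
}
BELIEF_TO_ACTION = {
    "rain likely": "take umbrella",
    "rain unlikely": "leave umbrella",
}

def intuitive_decision(cue):
    """Fast path: recognized cue retrieves a stored answer. Otherwise deliberate."""
    belief = CUE_TO_BELIEF.get(cue)
    if belief is None:
        return "deliberate"  # no stored cue: shift to slow, effortful thinking
    return BELIEF_TO_ACTION[belief]

print(intuitive_decision("dark clouds"))  # take umbrella
print(intuitive_decision("red sunset"))   # deliberate
```

The point of the caricature is that the “intuitive” path involves no intermediate reasoning at all—just retrieval—which is why such decisions feel fast and effortless.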

Those two extremes are associated with different experiences. Deliberative thought yields awareness of intermediate steps in a chain of thought, and of effortful combination of information.

Intuitive thought lacks awareness of intermediate cognitive steps (because there aren’t any) and does not feel effortful (because the cues trigger the response). But intuition is characterised by feelings of familiarity and fluency.


Is intuition any good?


Whether or not intuition is inherently “good” really depends on the situation.

Herbert A. Simon’s view that “intuition is recognition” was based on work describing the performance of chess experts.

Work by the Dutch psychologist Adriaan de Groot, and later by Simon and the psychologist William G. Chase, demonstrated that a signature of chess expertise is the ability to identify promising moves very rapidly.

That ability is achieved via immediate “pattern matching” against memories of up to 100,000 different game positions to determine the next best move.

Novices, in contrast, don’t have access to these memories and thus have to work through the possible contingencies of each move.

This line of research led to investigations of experts in other fields and the development of what has become known as recognition-primed decision making.

Work by the research psychologist Gary A. Klein and colleagues concluded that fire-fighters can make rapid “intuitive” decisions about how a fire might spread through a building because they can access a repertoire of prior similar experiences and run mental simulations of potential outcomes.

Thus in these kinds of situations, where we have lots of prior experience to draw on, rapid, intuitive decisions can be very good.

But intuition can also be misleading.

In a contrasting body of work, decision psychologist Daniel Kahneman (yet another Nobel Laureate) illustrated the flaws inherent in an over-reliance on intuition.

To illustrate such an error, he considered this simple problem: If a bat and a ball cost $1.10 in total and the bat costs $1 more than the ball, how much does the ball cost?

If you are like many people, your immediate – intuitive (?) – answer would be “10 cents”. The total readily separates into a $1 and 10 cents, and 10 cents seems like a plausible amount.

But a little more thinking reveals that this intuitive answer is wrong. If the ball cost 10 cents the bat would have to be $1.10 and the total would be $1.20! So the ball must cost 5 cents.
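The arithmetic is easy to verify mechanically. Here is a small illustrative Python check (not from the article) that works in cents to avoid floating-point rounding; it confirms that the intuitive answer violates the stated constraints while 5 cents satisfies them:

```python
# Brute-force check of the bat-and-ball problem, in cents:
#   bat + ball = 110  and  bat = ball + 100

def solve_bat_and_ball(total=110, difference=100):
    """Return the ball's price in cents, or None if no solution exists."""
    for ball in range(total + 1):
        bat = ball + difference
        if bat + ball == total:
            return ball
    return None

intuitive_ball = 10                      # the tempting "10 cents"
correct_ball = solve_bat_and_ball()      # 5 cents

# The intuitive answer fails the constraint: 10 + 110 = 120 cents, not 110.
assert intuitive_ball + (intuitive_ball + 100) != 110
assert correct_ball == 5
```

Deliberative thinking amounts to exactly this kind of constraint check, which the fast associative answer skips.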

So why does intuition lead us astray in this example? Because here intuition is not based on skilled recognition, but rather on simple associations that come to mind readily (i.e., the association between the $1 and the 10 cents).

Kahneman and Tversky famously argued these simple associations are relied upon because we often like to use heuristics, or shortcuts, that make thinking easier.

In many cases these heuristics will work well but if their use goes “unchecked” by more deliberative thinking, errors – such as the 10 cents answer – will occur.

Using intuition adaptively


The take-home message from the psychological study of intuition is that we need to exercise caution and attempt to use intuition adaptively.

When we are in situations we have experienced lots of times (such as making judgements about the weather), intuition – or rapid recognition of relevant “cues” – can be a good guide.

But if we find ourselves in novel territory or in situations in which valid cues are hard to come by (such as stock market predictions), relying on our “gut” may not be wise.

Our inherent tendency to get away with the minimum amount of thinking could lead to slip-ups in our reasoning.


~ Ben Newell receives funding from the Australian Research Council.

Editor's Note: This article was originally published by The Conversation, here, and is republished under a Creative Commons Attribution licence. See Creative Commons - Attribution Licence.

Saturday, May 11, 2013

New Books from Daniel Dennett and Douglas Hofstadter

Daniel Dennett's new book is Intuition Pumps and Other Tools for Thinking and Douglas Hofstadter's new book (written with French psychologist Emmanuel Sander) is Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Eric Banks from Bookforum reviews the two new books.

This Is Your Brain, On

Two books seek to explain how our minds work their way through the maze of consciousness


ERIC BANKS





YOU DON’T HAVE TO CONDUCT A THOUGHT EXPERIMENT to see why some philosophers or scientists want to write for an audience cheerfully indifferent to the ways of the seminar room and the strictures of the refereed journal. Beyond the fame and fortune, perhaps more important is the sense that if one’s work is worth doing at all, it ought to reach the widest possible audience, particularly when it bears on issues (religion, free will) with decisive implications for how readers choose to live. Some, I imagine, also relish the bonus frisson of mixing it up in the rowdy rough-and-tumble of the public arena. If you’re like Daniel C. Dennett—one of whose many mantras is Gore Vidal’s “It is not enough to succeed. Others must fail”—what’s the point of felling a philosophical tree if there’s no one to hear it? Since the publication in 1991 of his book Consciousness Explained, Dennett has gladly risen to the challenge, merrily taking on all comers left and right, in works that play to a packed house most philosophers couldn’t dream of.

For Dennett, moreover, the experience of communicating to a broad readership his brawny materialist agenda, which aims at nothing less than squaring philosophy with a host of other fields—cognitive psychology, brain science, evolutionary biology—has an ancillary and less obvious boon. Specialists, he writes, tend to underexplain to one another the very terms of their discussions. These experts benefit from translating their respective positions down, as it were, so that they might be presented to “curious nonexperts,” as Dennett puts it in his newest book, Intuition Pumps and Other Tools for Thinking. They will be forced to think anew, and paradoxically to think harder: “To explain their position under these conditions helps them find better ways of making their points than they had ever found before.”

The notion that an idea or “position” might get fine-tuned just as neatly in the imagined company of a well-intentioned fast learner as it would among scholarly peers is ingrained in Dennett’s go-go style of doing philosophy and its winner-take-all stakes. As set out in Intuition Pumps, his narrative approach, with its flurry of catchy neologisms, plain-talk prose, and gotcha argument stoppers, will prove as roundly appealing to some as it will seem pandering, I suppose, to others. The pep-talk jocularity and the shoot-from-the-hip posture of its presentation, familiar enough throughout Dennett’s writing, make it seem as if the book—which is focused less on a single subject than on a kind of survey of Dennett’s greatest hits (from positions on consciousness to free will, from “intentional stances” to “competence without comprehension”)—imagines its reader as a slightly nerdy college kid with high math SATs who, with just the right writerly nudge, might be tempted to jump majors. There’s no accounting for tastes, but the odd recipe Dennett produces here—one part avuncular guide (who dubs himself “Uncle Dan” early on), one part pugnacious tough guy—makes for a weird slaw. Picture a helpful Burl Ives crossed with a philosophical Robert Conrad from the old commercials for Eveready batteries, just taunting anybody to go ahead, try and knock this position off—I dare you.

Dennett declares that his aim in Intuition Pumps is to lay out devices by which we might think more clearly, or with more insight, about a host of thorny topics—which might be boiled down to those many areas in which we errantly or too hastily assume we have a solid sense of the right and wrong answers. The sheer number of these thought experiments, geared to reveal how thoroughly incorrect our assumptions might be, is daring itself. The most provocative comprise consciousness, free will, and our own sense of what we mean by meaning and intend by speaking of intentionality—in other words, the philosophical terrain Dennett has explored extensively in his prior books. (A glance at the notes reveals how vastly Intuition Pumps recycles material and arguments he has used or made in previously published work, extending as far back as 1969.)

The Karnak Temple, Thebes, Egypt, and the Grand Canyon, Arizona.

Part of Dennett’s role in Intuition Pumps is to serve as a kind of design engineer. With the concept of “intuition pump,” he repurposes the thought experiment—a form of argumentation of ancient and venerable purpose in philosophy (and in sundry other disciplines, especially physics)—in order to transform its somewhat neutral-sounding disposition into a power tool, one that answers to a basic question: Is it well or poorly designed to get the job done? (The interesting question of the epistemological standing of thought experiments in philosophy is never really given much attention here.) First rechristened as “intuition pumps” in The Mind’s I, the hybrid work Dennett co-produced in 1981 with his friend Douglas Hofstadter, these narrative devices can condense, in a straightforward way, a complex set of propositions and suppositions into an imaginable story that summarizes or illustrates a position. Hence their extreme popularity in the history of philosophy, from Plato’s cave to Parfit’s amoeba. They can be positive or critical, launching a new idea or yanking the rug out from under someone else’s pet position (or even both). Either way, such thought experiments are designed to jolt the hearer’s or reader’s sense of intuition (hence the idea of the “pump” that paradoxically strands the reader’s analogic mind in an awkward, ill-specified locution) and channel it in certain indubitable directions. To mix the metaphor, intuition pumps are thus double-edged: They can accomplish a lot, and produce a kind of free analogue of the costly experiments carried out in laboratories, but they can also carry a heavy cost for a thinker when they are dubiously or even dangerously built. The lesson that Dennett hopes the reader might take away is that we must remain wary of intuition pumps, taking them apart to find their hidden biases and built-in assumptions, before we let ourselves be overinflated by them.

I lost track of just how many intuition pumps Dennett ticks off in the book, but taken as a whole they offer a sense that philosophy is a fabulous field if your career in sci-fi fails to take off. A menagerie of mindless robots, devious neurosurgeons implanting replica brain cells in their unknowing subjects, parallel worlds with exactly a single element changed from ours, swamp men transformed by strikes of lightning into brain-duplicating doppelgängers: Almost all these personae and philosophical fables will be familiar to anyone who has followed the rough course of the philosophy of mind for the past several decades. Some of the signal thought experiments devised by philosophers since the early 1970s are present: Mary the color scientist, who emerges from a black-and-white world; John Searle’s Chinese Room experiment, the response he devised more than thirty years ago to challenge an argument made in 1950 by artificial-intelligence pioneer Alan Turing. If a computer could pass for a human being to an interlocutor who wasn’t aware he or she was communicating with a computer, Turing held, the machine could be said to possess intelligence. As familiar as Searle’s Chinese Room experiment may be, it is worth lingering on it, since it illustrates what is ultimately so frustrating about Intuition Pumps. To quote Searle’s summary of the experiment:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
Dennett famously objected in The Mind’s I, as he does here, that Searle’s experiment pumps the wrong intuitions—among other things, it massively misrepresents what it means to “manipulate the symbols” or to “understand.” The example is undercooked and unnourishing, and if Searle had known anything about computing, he couldn’t in good faith have constructed it as he did. Therefore it is a flawed intuition pump: “It persuades by clouding our imagination,” Dennett writes, “not exploiting it well.”

But what is the difference between a good intuition pump and a flawed one? The Chinese Room has spawned scores of counter–thought experiments, replicating itself in many variations of structure and content; by the mid-’90s, Steven Pinker commented that it had become the source of at least a hundred papers. It has allowed articulations of positions from a vast number of academic fields, from proponents of AI to linguists, and generated commentary on semantics and syntax, intentionality and consciousness, and evolution. Sounds like a pretty fecund little tool for thinking to me! But for the budding philosophy student reading Intuition Pumps, Dennett reserves the right to select the hammer and pick the gauge of nail. “Here I am concentrating on the thinking tool itself, not the theories and propositions it was aimed at,” Dennett hopefully backtracks. But what good is it to present this book as a collection of helpful “tools for thinking” when it turns out the only successful tools happen to run on precisely the same voltage as Dennett’s own particular theories and propositions?

Intuition Pumps is valuable in providing an overview of a body of recent work in the philosophy of mind, but it suffers as well from Dennett’s penchant for cleverness—no more egregiously than in his soi-disant playfulness in mapping nasty flaws on his favorite intellectual targets, like Stephen Jay Gould. It grows tiresome and tacky: He returns to a long-ago pissing match with Gould to discuss rhetorical sleights of hand, and even coins a new word to describe the tendency to advance straw-man arguments and false dichotomies—“Goulding.” How is that a better “thinking tool”? He mocks philosopher Ned Block’s use of the word surely as a sure sign of a mental block (get it?) and offers up “Occam’s Broom” as an example of an argument that sweeps inconvenient facts under the rug, and circularly, not to mention condescendingly, takes the opportunity to chide Thomas Nagel for not consulting “the experts” on evolutionary biology. (At least he doesn’t call this oversight a “Nageling.”) All this sour score-settling with Dennett’s philosophical peers is infinitely less witty than I imagine he takes it to be. But in the spirit of Dennett’s tactic, I’d offer one historical vignette that characterizes his frequent summoning of an army of scientists at his back, with an arsenal of cutting-edge knowledge about our chemistry and biology, ready at some later date to vindicate his positions, and call that future-perfect feint a Ledru-Rollin. That would be in honor of the hectoring French propagandist of 1848 who famously bellowed, “There go my people. I must follow them, for I am their leader.”

Intuition Pumps at least has the benefit of tasking us with thinking about what we do when we intuit a given set of problems. Working with and against intuitions is a strategy that also permeates Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, a book-length thought experiment published by French psychologist Emmanuel Sander and Dennett’s old partner, Douglas Hofstadter, probably still best known for his 1979 Pulitzer Prize–winning gift to high-SAT-math kids everywhere, Gödel, Escher, Bach: An Eternal Golden Braid. The new book brings together a laundry list of often laborious found examples, culled by the authors from their daily experiences and overheard talk, in order to tease out the logical paradoxes and contradictions of analogy, which, they contend, forms something like the essence of cognition. If that sounds like a sweeping claim, it is because in their view—as robustly fleshed out in Surfaces and Essences—the nature of analogical thinking is vastly more complex than our folk understanding of it. The book’s argument may be stated fairly simply: When we as subjects attempt to make sense of any phenomenal experience in the world, which we do at every waking moment, we do so through a kind of quick cognitive shorthand, forging analogies between the unknown and past experience, both consciously and unconsciously.

The magic of analogical thinking is its odd recursiveness, a plasticity that has long delighted Hofstadter and engaged his fascination with metalanguage and the brain-teasing conundrums of self-referential puzzles. Analogy as a rhetorical device is an almost endless source of such entanglements. What makes one analogy like another analogy? They’re both analogical. Their definition as analogies refers to, well, other analogies. From this angle, which is very much Hofstadter and Sander’s preferred vantage, there may be naive analogies, and there may be analogies that lead us into dangerous directions or that cause us to make poor judgments, but it’s hard to see what would count as a wrong analogy. As such, in Hofstadter and Sander’s view, analogies exemplify a form of mental mapping, tying various states of things together or announcing semiresemblances among different types of experiences as we pass through the world. This schema links analogical thought profoundly with perception. Analogies can be beacons of creative thought, but they can also be utterly banal—just as our perception can be at times. And as much as we try to control our ability to dazzle and amuse with new analogies, it’s more frequently the case that analogies have a hold over us—over our language, over our thought patterns.

Here’s a sort of example. I was recently trying to describe to someone the (to my mind) unusual fact of my dog—of a breed very highly marked as “American” (a coonhound)—spending several long weekends without me in rural France. What suddenly popped into my head was the analogy of Tom Ripley, played by Dennis Hopper, in Wim Wenders’s The American Friend (1977), a classic of the New German Cinema with an American actor occupying an unexpectedly European landscape. Where did that come from? But my mind did another turn, as odd perhaps as the first, and I blurted out the film as Fassbinder’s The American Soldier (1970), another “Americanism” of German film. A collision of analogies! It took some time for me to figure out the train of “A is to B” at play, but the important point is threefold. First, the mental linking, which felt unmotivated, no matter how “creative” the thought, was so rapid that it felt automatic. Second, I had this set of analogies primed by an experience that seemed to call forth a pseudocategory—experimental German films of the ’70s that, according to the logic of experimental German films of the ’70s, featured an American component (whether in their titles or their casting)—which it’s hard to imagine I might have stored somewhere as a useful “category,” years ago, just waiting for an analogical item to happen onto the scene. Finally, the multiple, linked frameworks involved (German films with Americans in them, films that have American in the title) are flexible enough that they could blend into one another to create in essence a makeshift, almost ad hoc new frame. (The downside of such conscious awareness on my part is that it’s hard not to look at the poor pooch now and not think of Dennis Hopper.)

Hofstadter and Sander’s book is a bottomless exploration of the potential of analogic thinking to eat away at any simple idea of how one thing is related to another. The authors pursue this problem by pondering analogies of all types and at ascending levels of abstraction, with lists that span pages. They posit that the logic of analogy brings together not just how two unrelated things get related by their likeness (A is to B as X is to Y), but how, on a different level of abstraction, we might analogize relationality itself. We make sense of a variety of situations, sometimes consciously, sometimes below the level of reflection, by thinking of them in terms of label-like proverbs or aphorisms or even fables: X situation is just like what we think of when we think of the experience of “sour grapes,” or of “have your cake and eat it too.” The abstraction that provides the “label” for an analogy may not even have a name: Hofstadter returns throughout the book to a recurring and evidently personally haunting example of a situation in which the act of gathering stray bottle caps while on a tour of the temple at Karnak is linked to a memory of his young son, during a family visit to the Grand Canyon, mesmerized by a formation of ants instead of the sublime view. There’s no convenient or pithy category label for what these two things share, though the manner in which Hofstadter processed the former experience, as he exhaustively argues, seems to depend on an ingrained version of analogical thinking.

This being a book with Douglas Hofstadter as an author, it will discursively scale the Karnak–Grand Canyon experience in the form of a poem written by a friend, plumb the loop-de-loop relationship between the analogy thus formed and the incipient category it instantiates, and construct a tower out of analogies that ensue from yet another . . . analogy. It will show how the ur-form of a particular analogy (“X is the Y of something”) will throw off endless variations (in one virtuoso list, Hofstadter and Sander offer found variations including the “Bill Gates of wastewater,” the “Tiger Woods of user-generated video,” and the “Mussolini of mulligatawny”). Surfaces and Essences is a Hofstadterian machine of knot tying: With lists after lists, some virtuosic, some groan inspiring, of how to do things with analogy, it becomes clear that analogy and categorization are inseparable. We use analogies whenever we open our mouths—it should be obvious in the sentence above that “scale,” “plumb,” and “tower” are just the tip of the iceberg, so to speak, of how pernicious analogical language is on some basic level of communication—and Hofstadter and Sander eat up 590 densely printed pages thick with puns to make sure that we don’t miss the point. Surfaces and Essences caps itself with a twenty-five-page-long dialogue between two characters, Katy and Anna, who, like a pair of escapees from a play by Brecht, debate the positions taken in the book, with the dummkopf Katy arguing the bad view that categories form the core of cognition while the enlightened, analogically hip Anna proves the errors of her ways. (Wouldn’t you know it—every argument that Katy comes up with for her view about the primacy of categories as the basis for analogy turns on conceptualizing “categories” by way of various analogies. Like a spin on Monsieur Jourdain, she didn’t know she was speaking in analogies all along!)

Rhetorical strategies aside—insert metajoke here—you have to wonder whom this book was written for. It seems like a textbook, but there are no notes, no bibliography, no real sense of how Surfaces and Essences fits with or argues against (analogous?) work in psychology or cognitive science. It’s difficult, too, to gauge its urgency. The book feels like it could have been hatched three decades ago, around the time that George Lakoff and Mark Johnson wrote Metaphors We Live By, which raises many similar issues. Maybe that’s irrelevant, and the point is to disabuse us of a conceptual, context-free model of “dictionary meaning”—that meaning is a matter of discrete taxonomic categories like “mammal” or “sandwich” or “president,” all of which will by definition contain a set of necessary and sufficient features. Fair enough (although who really believes that?). But when all thought becomes “analogized” as analogy, if it, like Bertrand Russell’s tale of the turtles, is analogy all the way down, the explanatory value of “analogy” comes to seem tautological. This may be the way cognition “works,” but if there’s no other way to express it but, well, analogically—then it frankly seems a game of increasingly clever wordplay. To take a further step and refer to “analogy” as the core, that is, essence, of thought is to cast the analogy as hard fact, which seems no less reductive than the kind of conceptual models from which Hofstadter and Sander have labored mightily to rid us. The analogy I’m thinking of is “can’t have it both ways.”


~ Eric Banks is the former editor in chief of Bookforum and the former president of the National Book Critics Circle.

Sunday, September 30, 2012

BBC Documentary - How to Make Better Decisions


I found this at Open Culture - it's a great little documentary from BBC Horizon. The producers take a look inside the world of cognitive science to discover how we make decisions, and how we might do that better if we understand how the brain works.



How to Make Better Decisions, a Thought-Provoking Documentary by the BBC

September 28th, 2012

This is the summary from Open Culture's post, and below is the one from the web site.
“In this program,” says narrator Peter Capaldi at the outset, “we’re going to show you how to be more rational, and deal with some of life’s biggest decisions.” It’s a pretty big claim, and you may doubt that it’s true (especially during the silly opening scene involving a group of nerds trying to score a date), but give this 2008 BBC Horizon program a little time and you might come away with a few things to think about. How to Make Better Decisions takes us inside cognitive science laboratories and out on the streets to demonstrate how the emotional part of our brain gets the better of the rational part. The film introduces a number of intriguing concepts, including Prospect Theory, “the framing effect,” and “priming.” More controversially, it highlights some research that suggests the possibility that our intuition may have something to do with an ability to sense future events. How to Make Better Decisions is 49 minutes long, and we’ve decided to add it to our growing collection of Free Movies Online.


This is the description with the video at YouTube.
According to science: We are bad at making decisions. Our decisions are based on oversimplification, laziness and prejudice. And that's assuming that we haven't already been hijacked by our surroundings or led astray by our subconscious!

Featuring exclusive footage of experiments that show how our choices can be confounded by temperature, warped by post-rationalisation and even manipulated by the future, Horizon presents a guide to better decision making, and introduces you to mathematician Garth Sundem, who is convinced that conclusions can best be reached using simple maths and a pencil!

Tuesday, August 07, 2012

First Thought, Best Thought?

Chogyam Trungpa Rinpoche taught this idea: don't overthink things. But is it always the best approach, to decision making or anything else? Sometimes. Maybe.

This article is from Sam McNerney at his Moments of Genius blog for Big Think.


Why The Future of Neuroscience Will Be Emotionless

Hello readers. I've been on vacation for the last several days. Here's an old post from my previous blog WhyWeReason.com to fill the void. It's about a paper by the NYU neuroscientist Joseph LeDoux, who argues that cognitive science needs to rethink how it understands emotions.

In Phaedrus, Plato likens the mind to a charioteer who commands two horses, one that is irrational and crazed and another that is noble and of good stock. The job of the charioteer is to control the horses to proceed towards Enlightenment and the truth.

Plato’s allegory sparked an idea that persisted through the next several millennia of Western thought: emotion gets in the way of reason. This makes sense to us. When people act out of order, they’re irrational. No one was ever accused of being too reasonable. Around the 17th and 18th centuries, however, thinkers began to challenge this idea. David Hume turned the tables on Plato: reason, Hume said, was the slave of the passions. Psychological research of the last few decades not only confirms this view; some of it suggests that emotion is better at making decisions.

We know a lot more about how the brain works than the ancient Greeks did, but a decade into the 21st century, researchers are still debating which of Plato’s horses is in control, and which one we should listen to.

A couple of recent studies are shedding new light on this age-old discourse. The first comes from Michael Pham and his team at Columbia Business School. The researchers asked participants to make predictions about eight different outcomes, ranging from American Idol finalists to the winner of the 2008 Democratic primary to the winner of the BCS championship game. They also forecast the Dow Jones average.

Pham created two groups. He told the first group to go with their guts and the second to think it through. The results were telling. In the American Idol results, for example, the first group correctly predicted the winner 41 percent of the time whereas the second group was only correct 24 percent of the time. The high-trust-in-feeling subjects even predicted the stock market better.

Pham and his team conclude the following:
Results from eight studies show that individuals who had higher trust in their feelings were better able to predict the outcome of a wide variety of future events than individuals who had lower trust in their feelings…. The fact that this phenomenon was observed in eight different studies and with a variety of prediction contexts suggests that this emotional oracle effect is a reliable and generalizable phenomenon. In addition, the fact that the phenomenon was observed both when people were experimentally induced to trust or not trust their feelings and when their chronic tendency to trust or not trust their feelings was simply measured suggests that the findings are not due to any peculiarity of the main manipulation.
Does this mean we should always trust our intuition? It depends. A recent study by Maarten Bos and his team identified an important nuance when it comes to trusting our feelings. They asked one hundred and fifty-six students to abstain from eating or drinking (sans water) for three hours before the study. When they arrived, Bos divided his participants into two groups: one that consumed a sugary can of 7-Up and another that drank a sugar-free drink.

After waiting a few minutes to let the sugar reach the brain, the students assessed four cars and four jobs, each with 12 key aspects that made them more or less appealing. (Bos designed the study so that an optimal choice was clear, giving him a measure of how well the students decided.) Next, half of the subjects in each group spent four minutes thinking about the jobs and cars (the conscious thought condition), while the other half watched a wildlife film (to prevent them from consciously thinking about the jobs and cars).
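The "optimal choice" baseline in a study like this can be sketched as a simple weighted-additive score. The option names and aspect values below are invented for illustration; they are not Bos's actual materials.

```python
# Toy sketch of a multi-attribute choice set with a built-in "correct"
# answer: each option has 12 aspects scored +1 (good) or -1 (bad), and
# the normatively best option is the one whose aspects sum highest.
# All names and values here are hypothetical.

def score(option):
    """Weighted-additive score: the sum of the option's aspect values."""
    return sum(option["aspects"])

cars = [
    {"name": "car A", "aspects": [+1] * 8 + [-1] * 4},  # 8 good, 4 bad
    {"name": "car B", "aspects": [+1] * 6 + [-1] * 6},  # 6 good, 6 bad
    {"name": "car C", "aspects": [+1] * 4 + [-1] * 8},  # 4 good, 8 bad
]

best = max(cars, key=score)
print(best["name"])  # car A dominates, so "deciding well" is measurable
```

Because one option dominates on this tally, an experimenter can treat the distance between a subject's ratings and this ranking as a measure of decision quality.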

Here’s the BPS Research Digest on the results:
For the participants with low sugar, their ratings were more astute if they were in the unconscious thought condition, distracted by the nature film. By contrast, the participants who’d had the benefit of the sugar hit showed more astute ratings if they were in the conscious thought condition and had had the chance to think deliberately for four minutes. ‘We found that when we have enough energy, conscious deliberation enables us to make good decisions,’ the researchers said. ‘The unconscious on the other hand seems to operate fine with low energy.’
So go with your gut if your energy is low. Otherwise, listen to your rational horse.
Here’s where things get difficult. By now, the debate over the role reason and emotion play in decision-making is well documented. Psychologists have written thousands of papers on the subject. It shows in the popular literature as well. From Antonio Damasio’s Descartes’ Error to Daniel Kahneman’s Thinking, Fast and Slow, the lay audience knows about both the power of thinking without thinking and our predictable irrationalities.

But what exactly is being debated? What do psychologists mean when they talk about emotion and reason? Joseph LeDoux, author of popular neuroscience books including The Emotional Brain and The Synaptic Self, recently published a paper in the journal Neuron that flips the whole debate on its head. “There is little consensus about what emotion is and how it differs from other aspects of mind and behavior, in spite of discussion and debate that dates back to the earliest days in modern biology and psychology.” Yes, what we call emotion roughly correlates with certain parts of the brain; it is usually associated with activity in the amygdala and other systems. But we might be playing a language game, and neuroscientists are reaching a point where an understanding of the brain requires more sophisticated language.

As LeDoux sees it, “If we don’t have an agreed-upon definition of emotion that allows us to say what emotion is… how can we study emotion in animals or humans, and how can we make comparisons between species?” The short answer, according to the NYU professor, is “we fake it.”

With this in mind LeDoux introduces a new term to replace emotion: survival circuits. Here’s how he explains it:
The survival circuit concept provides a conceptualization of an important set of phenomena that are often studied under the rubric of emotion—those phenomena that reflect circuits and functions that are conserved across mammals. Included are circuits responsible for defense, energy/nutrition management, fluid balance, thermoregulation, and procreation, among others. With this approach, key phenomena relevant to the topic of emotion can be accounted for without assuming that the phenomena in question are fundamentally the same or even similar to the phenomena people refer to when they use emotion words to characterize subjective emotional feelings (like feeling afraid, angry, or sad). This approach shifts the focus away from questions about whether emotions that humans consciously experience (feel) are also present in other mammals, and toward questions about the extent to which circuits and corresponding functions that are relevant to the field of emotion and that are present in other mammals are also present in humans. And by reassembling ideas about emotion, motivation, reinforcement, and arousal in the context of survival circuits, hypotheses emerge about how organisms negotiate behavioral interactions with the environment in process of dealing with challenges and opportunities in daily life.
Needless to say, LeDoux’s paper changes things. Because emotion is an unworkable term for science, neuroscientists and psychologists will have to understand the brain on new terms. And when it comes to the reason-emotion debate – which of Plato’s horses we should trust – they will have to rethink certain assumptions and claims. The difficult part is that we humans, by our very nature, cannot help but resort to folk psychology to explain the brain. We deploy terms like soul, intellect, reason, intuition, and emotion, but these words describe very little. Can we understand the brain even though our words may never suffice? The future of cognitive science might depend on it.

Saturday, June 23, 2012

Debunking the Myth of Intuition - Daniel Kahneman Interviewed in Spiegel

I thoroughly enjoyed Daniel Kahneman's most recent book, Thinking, Fast and Slow - in which he explains the two cognitive systems that shape the way we think: "System 1 is fast, intuitive, and emotional; System 2 is slower, more deliberative, and more logical." Great book, and this is an excellent interview in which you can get a taste of his ideas.


SPIEGEL Interview with Daniel Kahneman

Debunking the Myth of Intuition

Can doctors and investment advisers be trusted? And do we live more for experiences or memories? In a SPIEGEL interview, Nobel Prize-winning psychologist Daniel Kahneman discusses the innate weakness of human thought, deceptive memories and the misleading power of intuition.

SPIEGEL: Professor Kahneman, you've spent your entire professional life studying the snares in which human thought can become entrapped. For example, in your book, you describe how easy it is to increase a person's willingness to contribute money to the coffee fund.
 
Kahneman: You just have to make sure that the right picture is hanging above the cash box. If a pair of eyes is looking back at them from the wall, people will contribute twice as much as they do when the picture shows flowers. People who feel observed behave more morally.
 
SPIEGEL: And this also works if we don't even pay attention to the photo on the wall?
 
Kahneman: All the more if you don't notice it. The phenomenon is called "priming": We aren't aware that we have perceived a certain stimulus, but it can be proved that we still respond to it.
 
SPIEGEL: People in advertising will like that.
 
Kahneman: Of course, that's where priming is in widespread use. An attractive woman in an ad automatically directs your attention to the name of the product. When you encounter it in the shop later on, it will already seem familiar to you.
 
SPIEGEL: Isn't erotic association much more important?
 
Kahneman: Of course, there are other mechanisms of advertising that also act on the subconscious. But the main effect is simply that a name we see in a shop looks familiar -- because, when it looks familiar, it looks good. There is a very good evolutionary explanation for that: If I encounter something many times, and it hasn't eaten me yet, then I'm safe. Familiarity is a safety signal. That's why we like what we know.
 
SPIEGEL: Can these insights also be applied to politics?
 
Kahneman: Of course. For example, one can show that anything that reminds people of their mortality makes them more obedient.
 
SPIEGEL: Like the cross above the altar?
 
Kahneman: Yes, there is even a theory that deals with the fear of death; it's called "Terror Management Theory." You can influence people by just reminding them of something -- it can be death; it can be money. Any symbol that is associated with money, even if it's just dollar signs as a screensaver, ensures that people will pay more attention to their own interests than they will want to help others.
 
SPIEGEL: It seems that priming works primarily in favor of the political right.
 
Kahneman: It would work just as well the other way around. There's an experiment, for example, in which people were playing a game but, in the first group, it was called a "competition game" and, in the other group, it was called a "community game." And, in the latter case, people acted less selfish even though it's exactly the same game.
 
SPIEGEL: Is there no way to escape those powerful suggestions?
 
Kahneman: It isn't easy, at any rate. The problem is that we usually don't notice these influences.
 
SPIEGEL: That's pretty unsettling.
 
Kahneman: Well, it can't be too bad because we live with that all the time. That's just the way it is.
 
SPIEGEL: But we want to know what our decisions are based on!
 
Kahneman: I'm not even sure I want that, to be honest, because it would be too complicated. I don't think we really are very keen to be self-controlled all the time.
 
SPIEGEL: You say in your book that, in such cases, we leave the decisions up to "System 1."
 
Kahneman: Yes. Psychologists distinguish between a "System 1" and a "System 2," which control our actions. System 1 represents what we may call intuition. It tirelessly provides us with quick impressions, intentions and feelings. System 2, on the other hand, represents reason, self-control and intelligence.
 
SPIEGEL: In other words, our conscious self?
 
Kahneman: Yes. System 2 is the one who believes that it's making the decisions. But in reality, most of the time, System 1 is acting on its own, without your being aware of it. It's System 1 that decides whether you like a person, which thoughts or associations come to mind, and what you feel about something. All of this happens automatically. You can't help it, and yet you often base your decisions on it.
 
SPIEGEL: And this System 1 never sleeps?
 
Kahneman: That's right. System 1 can never be switched off. You can't stop it from doing its thing. System 2, on the other hand, is lazy and only becomes active when necessary. Slow, deliberate thinking is hard work. It consumes chemical resources in the brain, and people usually don't like that. It's accompanied by physical arousal, increasing heart rate and blood pressure, activated sweat glands and dilated pupils …
 
SPIEGEL: … which you discovered as a useful tool for your research.
 
Kahneman: Yes. The pupil normally fluctuates in size, mostly depending on incoming light. But, when you give someone a mental task, it widens and remains surprisingly stable -- a strange circumstance that proved to be very useful to us. In fact, the pupils reflect the extent of mental effort in an incredibly precise way. I have never done any work in which the measurement is so precise.


The Pitfalls of Intuition
 
SPIEGEL: By studying human intuition, or System 1, you seem to have learned to distrust this intuition…
 
Kahneman: I wouldn't put it that way. Our intuition works very well for the most part. But it's interesting to examine where it fails.
 
SPIEGEL: Experts, for example, have gathered a lot of experience in their respective fields and, for this reason, are convinced that they have very good intuition about their particular field. Shouldn't we be able to rely on that?
 
Kahneman: It depends on the field. In the stock market, for example, the predictions of experts are practically worthless. Anyone who wants to invest money is better off choosing index funds, which simply follow a certain stock index without any intervention of gifted stock pickers. Year after year, they perform better than 80 percent of the investment funds managed by highly paid specialists. Nevertheless, intuitively, we want to invest our money with somebody who appears to understand, even though the statistical evidence is plain that they are very unlikely to do so. Of course, there are fields in which expertise exists. This depends on two things: whether the domain is inherently predictable, and whether the expert has had sufficient experience to learn the regularities. The world of stocks is inherently unpredictable.
 
SPIEGEL: So, all the experts' complex analyses and calculations are worthless and no better than simply betting on the index?
 
Kahneman: The experts are even worse because they're expensive.
 
SPIEGEL: So it's all about selling snake oil?
 
Kahneman: It's more complicated because the person who sells snake oil knows that there is no magic, whereas many people on Wall Street seem to believe that they understand. That's the illusion of validity …
 
SPIEGEL: … which earns them millions in bonuses.
 
Kahneman: There is no need to be cynical. You may be cynical about the whole banking system, but not about the individuals. Many believe they are building real value.
 
SPIEGEL: How did Wall Street respond to your book?
 
Kahneman: Oh, some people were really mad; others were quite interested and positive. It was on Wall Street, I heard, that somebody gave a thousand copies of my book to investors. But, of course, many professionals still don't believe me. Or, to be more precise, they believe me in general, but they don't apply that to themselves. They feel that they can trust their own judgment, and they feel comfortable with that.
 
SPIEGEL: Do we generally put too much faith in experts?
 
Kahneman: I'm not claiming that the predictions of experts are fundamentally worthless. … Take doctors. They're often excellent when it comes to short-term predictions. But they're often quite poor in predicting how a patient will be doing in five or 10 years. And they don't know the difference. That's the key.
 
SPIEGEL: How can you tell whether a prediction is any good?
 
Kahneman: In the first place, be suspicious if a prediction is presented with great confidence. That says nothing about its accuracy. You should ask whether the environment is sufficiently regular and predictable, and whether the individual has had enough experience to learn this environment.
 
SPIEGEL: According to your most recent book "Thinking, Fast and Slow," when in doubt, it's better to trust a computer algorithm.
 
Kahneman: When it comes to predictions, algorithms often just happen to be better.
 
SPIEGEL: Why should that be the case?
 
Kahneman: Well, the results are unequivocal. Hundreds of studies have shown that wherever we have sufficient information to build a model, it will perform better than most people.
 
SPIEGEL: How can a simple procedure be superior to human reasoning?
 
Kahneman: Well, even models are sometimes useless. A computer will be just as unreliable at predicting stock prices as a human being. And the political situation in 20 years is probably completely unpredictable; the world is simply too complex. However, computer models are good where things are relatively regular. Human judgment is easily influenced by circumstances and moods: Give a radiologist the same X-ray twice, and he'll often interpret it differently the second time. But with an algorithm, if you give it the same information twice, it will turn out the same answer twice.
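Kahneman's consistency point can be rendered as a toy model (the weights and the noise term here are my invention, not anything from the interview): a fixed formula maps the same input to the same output every time, while a judge whose score drifts with mood does not.

```python
# Hypothetical illustration of algorithmic consistency: a fixed linear
# scoring rule versus the same rule plus "mood" noise. The weights and
# the noise model are invented for this sketch.
import random

def model_judgment(features, weights=(0.5, 0.3, 0.2)):
    """A simple linear rule: identical input, identical output."""
    return sum(w * f for w, f in zip(weights, features))

def human_judgment(features, rng):
    """The same rule plus day-to-day noise standing in for mood and context."""
    return model_judgment(features) + rng.gauss(0, 0.5)

case = (7.0, 4.0, 9.0)  # e.g. three scored aspects of the same X-ray
print(model_judgment(case) == model_judgment(case))  # True, always

rng = random.Random(0)
print(human_judgment(case, rng) == human_judgment(case, rng))  # False: same case, different answers
```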
 
SPIEGEL: IBM has developed a supercomputer called "Watson" that is supposed to quickly supply medical diagnoses by analyzing the description of symptoms and the patient's history. Is this the medicine of the future?
 
Kahneman: I think so. There's no magic involved.
 
SPIEGEL: Some say the next blockbuster movie could be predicted by an algorithm, as well.
 
Kahneman: Why not? The alternative is simply not very convincing. The entertainment industry wastes a lot of money on films that don't work. It shouldn't be that difficult to develop a program that at least doesn't do any worse than the intuitive judgments that govern these decisions now.
 
SPIEGEL: But most people tend to be hostile to formulas and cold calculations, and many patients prefer a doctor who treats them holistically.
 
Kahneman: It's a question of what you're used to. So-called "evidence-based medicine" is making progress, and it's based on clear, replicable algorithms. Or take the oil industry. There are strict procedures on deciding whether or not to drill in a specific location. They have a set of questions that they ask, and then they measure. Relying on intuition would be far too error-prone. After all, the risks are high, and there is a lot of money at stake.


Memory, Trauma and Time
 
SPIEGEL: In the second part of your book, you deal with the question of why we can't even rely on our memory. You claim, for example, that when a person has suffered, in retrospect, it doesn't matter to him or her how long the pain lasted. That sounds rather absurd.
 
Kahneman: The findings are clear. We demonstrated this in patients who had had a colonoscopy. In half of the cases, we asked the doctors to wait a while after having finished before removing the tube from the patients. In other words, for them, the unpleasant procedure was prolonged. And that, it turns out, greatly improved the scores that people gave to the experience. The patients clearly based their global assessments of the procedure on how it ended, and they perceived the gradual subsidence of pain as being much more pleasant. Many other experiments arrived at similar results. In some cases, subjects had to tolerate noise and, in others, they had to hold their hand in cold water. The issue is not memory: People know how long they had to endure the pain, so their memory is correct. But their evaluation of the experience is unaffected by duration.
 
SPIEGEL: How can that be?
 
Kahneman: Every experience is given a score in your memory: good, bad, worse. And that's completely independent of its duration. Only two things matter here: the peaks -- that is, the worst or best moments -- and the outcome. How did it end up?
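What Kahneman describes here is often called the peak-end rule, and the arithmetic is simple enough to sketch. The pain numbers below are toy values of mine, not data from the colonoscopy study.

```python
# Toy rendering of the peak-end rule: the remembered score of an episode
# is roughly the average of its worst moment and its final moment, and
# duration plays no role. The intensity values are hypothetical.

def remembered_pain(episode):
    """Peak-end evaluation of a list of per-minute pain intensities."""
    return (max(episode) + episode[-1]) / 2

short_procedure = [2, 5, 8]        # ends at the peak
prolonged = [2, 5, 8, 4, 2]        # same peak, longer, but the pain tapers off

print(remembered_pain(short_procedure))  # 8.0: remembered as worse
print(remembered_pain(prolonged))        # 5.0: more total pain, milder memory
```

On this rule the prolonged procedure, which contains strictly more pain, is nonetheless remembered as milder, exactly the pattern the colonoscopy patients showed.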
 
SPIEGEL: So, after painful procedures, should doctors simply ask whether they might subject the patient to a few more minutes of moderate torture?
 
Kahneman: No. Because if a doctor says it's over, the episode is finished for the patient -- and that's the point at which a value is assigned. After that, a new episode begins, and no one would ask for additional pain in advance. … But it would probably be useful, mainly for patients who have suffered a trauma. My advice would be: Don't remove them from the site of the trauma to treat them elsewhere. You should try to make them feel better in the same place so that the memory of what happened to them will not be as bad.
 
SPIEGEL: Because this changes the perception they associate with that location?
 
Kahneman: No, because moving away from the location is perceived as the end of an episode, and the evaluation made at that time will be stored in memory.
 
SPIEGEL: But that doesn't prevent us from living through every bad experience again and again, as in a movie.
 
Kahneman: That certainly is the case. But what you evaluate in the end, or what you will fear in the future, is that representative, especially intense moment -- not the entire episode. It's similar with animals, by the way.
 
SPIEGEL: How can you know that?
 
Kahneman: It's easy to study -- in rats, for example -- by giving them light electric shocks. You can vary both the intensity and the duration of the shocks. And you can measure how afraid they are. You'll see that it depends on the intensity, not on the duration.
 
SPIEGEL: In other words, our memory also informs what we expect from the future?
 
Kahneman: Exactly. This can be demonstrated with a small thought experiment I sometimes ask people to do: Suppose you go on a vacation and, at the end, you get an amnesia drug. Of course, all your photographs are also destroyed. Would you take the same trip again? Or would you choose one that's less challenging? Some people say they wouldn't even bother to go on the vacation. In other words, they prefer to forsake the pleasure, which, of course, would remain completely unaffected by its being erased afterwards. So they are clearly not doing it for the experience; they are doing it entirely for the memory of it.
 
SPIEGEL: Why is it so important for us to imagine our lives as a collection of stories?
 
Kahneman: Because that's all we keep from life. It's going by, and you are left with stories. That's why people exaggerate the importance of memories.
 
SPIEGEL: But, if I'm planning a vacation, I wouldn't accept being terribly bored most of the time just for the sake of a few highlights.
 
Kahneman: Of course not. And if I ask you whether you would rather tolerate pain for three minutes or five minutes, the answer is just as clear. But, in retrospect, the vacation that left you with the best memories wins out. How long you've been bored between the memorable moments is no longer relevant.
 
SPIEGEL: It was rather exhausting and difficult for you to write the book. You must remember how long it lasted, that is, the duration.
 
Kahneman: That's true. I could very quickly go through a film of four years of pain, but mostly I remember moments -- and most of them are bad.
 
SPIEGEL: Do you re-evaluate this period of time now that the book has become such a big success?
 
Kahneman: There is much less pain associated with the memory now. In my mind, if the book had done less well, I would feel even worse about what happened to me during those years. So, clearly, what happens later changes the story.
 
SPIEGEL: Would we even start such a challenging project a second time if it weren't for the partial amnesia?
 
Kahneman: Well, you don't know how much pain you are going to have. But, later on, we remember the great relief we felt after completing the task. … In childbirth, for example, it's all about the story that ends well, and that offsets what may have been horrible until then. It's as if we were divided into an experiencing self, which has to endure the strain, and a remembering self, which doesn't care at all.


Happiness and the Remembering Self
 
SPIEGEL: So, do we have our remembering self to thank for the fact that we courageously go out in search of adventure and memorable moments in life? Would we otherwise simply be content with long, dull periods of moderate well-being?
 
Kahneman: Yes, our lives are governed by the remembering self. Even when we're planning something, we anticipate the memories we expect to get out of it. The experiencing self, which may have to put up with a lot in return, has no say in the matter. Besides, what the experiencing self has enjoyed can be completely devalued in retrospect. Someone once told me that he had recently listened to a wonderful symphony but, unfortunately, at the end, there was a terrible screeching sound on the record. He said that ruined the whole experience. But, of course, the only thing it ruined was the memory of the experience, which was still a happy experience.
 
SPIEGEL: Does that also apply to an entire life? Is it all about the end?
 
Kahneman: Yes, in a sense. We can't help but look at life retrospectively, and we want it to look good in retrospect. There was once an experiment in which the subjects were supposed to evaluate the life of a fictitious woman who had had a very happy life but then died in an accident. Astonishingly, whether she died at 30 or 60 had no effect whatsoever on their evaluation. But when the subjects were told that the woman had had 30 happy years followed by five that were not so happy, the scores got worse. Or imagine a scientist who has made an important discovery, a happy and successful man, and after his death it turns out that the discovery was false and isn't worth anything. It spoils the entire story even though absolutely nothing about the scientist's life has changed. But now you feel pity for him.
 
SPIEGEL: Would you go so far as to say that it's the remembering self that makes us human? Animals probably don't collect memorable moments.
 
Kahneman: Well, I actually think that animals do, because they must score some experiences as worth repeating and others as worth avoiding. And, from the evolutionary point of view, that makes sense. The duration of an experience is simply not relevant. What matters for survival is whether it ended well and how bad it got. This also applies to animals.
 
SPIEGEL: In your view, the remembering self is very dominant -- to the point that it seems to have practically enslaved the experiencing self.
 
Kahneman: In fact, I call it a tyranny. It can vary in intensity, depending on culture. Buddhists, for example, emphasize the experience, the present; they try to live in the moment. They put little weight on memories and retrospective evaluation. For devout Christians, it's completely different. For them, the only thing that matters is whether they go to heaven at the end.
 
SPIEGEL: People reading your book will sympathize with the poor experiencing self, which essentially has to do our living.
 
Kahneman: That was my intention. Readers should realize that there is another way of looking at it. I would say it's comforting for me because both my wife and I complain all the time that our memories are terrible. We don't really go to the theater to remember what we've seen later on, but to enjoy the performance. Other people go through life collecting experiences the way one collects pictures.
 
SPIEGEL: In other words, they think that only a wealth of memories can make them happy.
 
Kahneman: Here we have to distinguish between satisfaction and happiness. When you ask people whether they're happy, their answers can differ widely depending on their current mood. Let me give you an example: For years, the Gallup Institute has been polling about a thousand Americans on various issues, including their well-being. One of the most surprising findings is that, when the first question is about politics, people immediately consider themselves less happy.
 
SPIEGEL: True calamities, on the other hand, seem to have surprisingly little effect on well-being. Paraplegics, for example, hardly differ from healthy individuals in terms of their satisfaction with life.
 
Kahneman: At any rate, the difference is smaller than one would expect. That's because, when we think of paraplegics, we are subject to an illusion that is hard to escape: We automatically focus on all the things that change as a result of the disability, and we overlook what is still the same in everyday life. It's similar with income. Everyone wants to make more money, and yet the salary level -- at least above a certain threshold -- has no influence whatsoever on emotional happiness, although life satisfaction continues to rise with income.
 
SPIEGEL: And where is that threshold?
 
Kahneman: Here in the United States, it's at a household income of about $75,000 (€60,000). Below that, it makes a substantial difference. It's terrible to be poor. No matter if you are sick or going through a divorce, everything is worse if you're poor.
 
SPIEGEL: So, is it harder to get used to illness or disability than to poverty?
 
Kahneman: I think we adapt more quickly to improvement than to deterioration.
 
SPIEGEL: Professor Kahneman, we thank you for this interview.

Interview conducted by Manfred Dworschak and Johann Grolle.