
Saturday, July 28, 2012

Barry Schwartz – Practical Wisdom: The Right Way to Do the Right Thing


The Association for Psychological Science posted this video from the APS 24th Annual Convention (2012) featuring Barry Schwartz speaking about his book, Practical Wisdom: The Right Way to Do the Right Thing.

Below the video is the first entry from the Practical Wisdom blog at Psychology Today, which offers a good overview of the book. For a shorter video, you can view the TED Talk, Using Our Practical Wisdom.

Barry Schwartz – Practical Wisdom: The Right Way to Do the Right Thing



Practical Wisdom: The Right Way to Do the Right Thing

Why neither rules nor incentives are enough to solve the problems we face.



We Americans are growing increasingly disenchanted with the institutions on which we depend. We can't trust them. They disappoint us. They fail to give us what we need. This is true of schools that are not serving our kids as well as we think they should. It is true of doctors who seem too busy to give us the attention and unhurried care we crave. It's true of banks that mismanage our assets, and of bond-rating agencies that fail to provide an accurate assessment of the risk of possible investments. It's true of a legal system that seems more interested in expedience than in justice. It's true of a workplace in which we fulfill quotas and hit targets and manage systems but wind up feeling disconnected from the animating forces that drew us to our careers in the first place.

And the disenchantment we experience as recipients of services is often matched by the dissatisfaction of those who provide them. Most doctors want to practice medicine as it should be practiced. But they feel helpless faced with the challenge of balancing the needs and desires of patients with the practical demands of hassling with insurance companies, earning enough to pay malpractice premiums, and squeezing patients into seven-minute visits--all the while keeping up with the latest developments in their fields. Most teachers want to teach kids the basics and at the same time excite them with the prospects of educating themselves. But teachers feel helpless faced with the challenge of reconciling these goals with mandates to meet targets on standardized tests, to adopt specific teaching techniques, and to keep up with the ever-increasing paperwork. No one is satisfied--not the professionals and not their clients.

When we try to make things better, we generally reach for one of two tools. The first tool is a set of rules and administrative oversight mechanisms that tell people what to do and monitor their performance to make sure they are doing it. The second tool is a set of incentives that encourage good performance by rewarding people for it. The assumption behind carefully constructed rules and procedures, with close oversight, is that even if people do want to do the right thing, they need to be told what that is. And the assumption underlying incentives is that people will not be motivated to do the right thing unless they have an incentive to do so. Rules and incentives. Sticks and carrots. What else is there?

This blog is an attempt to answer this question. In our new book, Practical Wisdom: The Right Way to Do the Right Thing, we acknowledge the need for both rules and incentives. But rules and incentives are not enough. They leave out something essential. It is what the classical philosopher Aristotle called practical wisdom (his word was phronesis). Without this missing ingredient, neither rules (no matter how detailed and well monitored) nor incentives (no matter how clever) will be enough to solve the problems we face.

The term practical wisdom sounds like an oxymoron to modern ears. We tend to think of "wisdom" as the opposite of "practical." Wisdom is about abstract, ethereal matters like "the way" or "the good" or "the truth" or "the path," and it is the province of special sages. Aristotle's teacher, Plato, shared this view that wisdom was theoretical and abstract, and the gift of only a few. But Aristotle disagreed. He thought that our fundamental social practices constantly demanded choices--like when to be loyal to a friend, or how to be fair, or how to confront risk, or when and how to be angry--and that making the right choices demanded wisdom. Aristotle distilled the idea of practical wisdom in his classic book, Nicomachean Ethics. Ethics, said Aristotle, was not mainly about establishing moral rules and following them. It was about performing a particular social practice well--being a good friend or parent or doctor or soldier or citizen or statesman--and that meant figuring out the right way to do the right thing in a particular circumstance, with a particular person, at a particular time.

This is what took practical wisdom. Aristotle's Ethics was about what we needed to learn to succeed at our practices and to flourish as human beings. We needed to learn certain character traits like loyalty, self-control, courage, fairness, generosity, gentleness, friendliness, and truthfulness--a list that today would also include perseverance, integrity, open-mindedness, thoroughness, and kindness. Aristotle called these traits "excellences" (arete)--often translated as "virtues." But the master excellence--the virtue at the heart of his Ethics--was practical wisdom. None of these other traits could be exercised well without it.

Why "wisdom"? Why "practical"? Why not just a good set of rules to follow? Most experienced practitioners know that rules only take them so far. Consider the doctor. How should the doctor balance respect for the autonomy of her patients when it comes to making decisions with the knowledge that sometimes the patient is not the best judge of what is needed? How should the doctor balance the desire to spend enough time with each patient to be thorough, compassionate, and understanding with the need to see enough patients in a day to keep the office solvent? How should the doctor balance the desire to tell patients the truth, no matter how difficult, with the desire to be kind?

Doctors--and teachers attempting to teach and inspire, or lawyers attempting to provide proper counsel and serve justice--are not puzzling over a choice between the "right" thing and the "wrong" thing. The common quandaries they face are choices among right things that clash. A good doctor needs to be honest with her patients, and kind to her patients, and to give them the hope they need to endure difficult treatments. But in diagnosing or treating a patient, these aims can be at odds, and the doctor must decide whether to be honest or kind, or more likely how to balance honesty and kindness in a way that is appropriate for the patient in front of her.

Aristotle recognized that balancing acts like these beg for wisdom, and that abstract or ethereal wisdom would not do. Wisdom has to be practical because the issues we face are embedded in our everyday work. They are not hypotheticals being raised in college ethics courses. They are quandaries that any practitioner must resolve to do her work well. Practical wisdom combines the will to do the right thing with the skill to figure out what the right thing is.

In this blog, we will describe the essential characteristics of practical wisdom, and show why it's needed to inform the everyday activities of doctors, lawyers, and teachers--parents, lovers, and friends. We will discuss some impressive examples of wisdom--and its absence--in practice. We will show that rules and incentives--the tools we reach for to improve our schools or our clinics, or even our banks--are no substitute for wisdom. Worse, they are the enemies of wise practice. And finally we will suggest that when wisdom is cultivated it is not only good for society but is, as Aristotle thought, a key to our own happiness. Wisdom isn't just something we "ought" to have. It's something we want to have to flourish. Our book makes all of these points in detail. In the blog, we will try to give you a taste of the arguments in the book.

We've been working together on practical wisdom, and teaching courses in it, for a decade. During that time, we have seen the failure of the institutions we rely on grow more acute, and the need for wisdom grow more urgent. We hope this blog will help people appreciate the central importance of practical wisdom, and induce them to ask how they can nurture it in their own lives and in the lives of the people with whom they live and work.

Attention and Consciousness Rely on Distinct Neural Mechanisms

The information in this post is related to my ongoing series on Bernard Baars' Global Workspace Theory of consciousness (see Part One, Part Two, and Part Three - part four is in process). In Baars' model, attention and consciousness are not identical, whereas many other cognitive models of consciousness either treat the two as identical or view attention as an inseparable aspect of consciousness (Posner, 1994; Jackendoff, 1996; Velmans, 1996; Merikle and Joordens, 1997; Mack and Rock, 1998; Chun and Wolfe, 2000; O’Regan and Noe, 2001; Mole, 2008; De Brigard and Prinz, 2010; Prinz, 2010).

In a 2010 article, Christof Koch and his team (Consciousness and attention: On sufficiency and necessity) reviewed an extensive body of research showing that attention and consciousness can be examined independently. Here is the abstract for that paper:
Recent research has slowly corroded a belief that selective attention and consciousness are so tightly entangled that they cannot be individually examined. In this review, we summarize psychophysical and neurophysiological evidence for a dissociation between top-down attention and consciousness. The evidence includes recent findings that show subjects can attend to perceptually invisible objects. More contentious is the finding that subjects can become conscious of an isolated object, or the gist of the scene in the near absence of top-down attention; we critically re-examine the possibility of “complete” absence of top-down attention. We also cover the recent flurry of studies that utilized independent manipulation of attention and consciousness. These studies have shown paradoxical effects of attention, including examples where top-down attention and consciousness have opposing effects, leading us to strengthen and revise our previous views. Neuroimaging studies with EEG, MEG, and fMRI are uncovering the distinct neuronal correlates of selective attention and consciousness in dissociative paradigms. These findings point to a functional dissociation: attention as analyzer and consciousness as synthesizer. Separating the effects of selective visual attention from those of visual consciousness is of paramount importance to untangle the neural substrates of consciousness from those for attention.
In order to help explicate how consciousness and attention inter-relate, Koch has created a 2x2 visual representation (an attention × consciousness design matrix). The matrix allows different objects, stimuli, or features to be categorized in terms of whether they give rise to consciousness and whether they require top-down attentional processing.
Previously, we argued that each behavior or percept can be categorized within a 2 × 2 design matrix, defined by whether it gives rise to consciousness and whether it requires top-down attentional amplification (Koch and Tsuchiya, 2007).

The lower right quadrant of our attention × consciousness design matrix (Table 1) is filled with behaviors or percepts in which attention is necessary for them to give rise to consciousness. For example, an unexpected and unfamiliar stimulus requires top-down attention in order to be consciously perceived. Otherwise, such a stimulus goes unnoticed, a phenomenon called inattentional blindness (Mack and Rock, 1998).

Table 1. A four-fold classification of percepts and behaviors depending on whether or not top-down attention is necessary and whether or not these percepts and behaviors give rise to phenomenal consciousness. Different percepts and behaviors are grouped together according to these two, psychophysically defined, criteria.

At the top-left of the table are behaviors or percepts that do not require the deployment of top-down attention, and that can occur in the absence of conscious perception. For instance, a perceptually invisible grating that is not attended will still lead to a visible afterimage (e.g., van Boxtel et al., 2010). That is, the formation of afterimages can be independent both of paying attention to the inducer and of consciously perceiving it.

In the first half of the review, we focus on the rest of the matrix: attention without consciousness (bottom-left) and consciousness without attention (top-right). We examine whether attention is necessary and/or sufficient for consciousness.

While many scholars agree that attention and consciousness are distinct, it is popular to assume that attention is necessary for consciousness. For example, Dehaene et al. (2006) argue that without top-down attention, an event cannot be consciously perceived and remains in a preconscious state. Another view is that attention and consciousness are so intertwined that they cannot be operationally separated (O’Regan and Noe, 2001; De Brigard and Prinz, 2010; Prinz, 2010).
The conclusion of the first half of their paper is the point I am after here: "Attention to a stimulus or an attribute of this stimulus is neither strictly necessary nor sufficient for the stimulus or its attribute to be consciously perceived."
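To make the quadrant structure of the design matrix concrete, here is a minimal sketch in Python. This is my own illustration, not code from Koch's paper; the quadrant examples come from the excerpts above, and every name in the code is hypothetical.

```python
# A minimal sketch of the attention x consciousness design matrix.
# Keys are (top_down_attention_required, gives_rise_to_consciousness);
# the example entries are taken from the excerpts quoted above.
design_matrix = {
    (False, False): "formation of afterimages (van Boxtel et al., 2010)",
    (True, False): "attention without consciousness (e.g., attending to perceptually invisible objects)",
    (False, True): "consciousness in the near absence of attention (e.g., the gist of a scene)",
    (True, True): "unexpected, unfamiliar stimuli (unnoticed without attention: inattentional blindness)",
}

def classify(attention_required: bool, conscious: bool) -> str:
    """Return the example percept/behavior for one quadrant of the matrix."""
    return design_matrix[(attention_required, conscious)]

# The off-diagonal quadrants carry the dissociation argument:
# attention is neither strictly necessary nor sufficient for consciousness.
print(classify(attention_required=False, conscious=True))
print(classify(attention_required=True, conscious=False))
```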

As I mentioned above, Baars (2005) is one of the cognitive theorists of consciousness who believe that attention and consciousness are not only separable but rely on unique neural systems - others, as cited by Koch, include Wundt, 1874; Iwasaki, 1993; Hardcastle, 1997; Naccache et al., 2002; Lamme, 2003; Woodman and Luck, 2003; Kentridge et al., 2004; Koch, 2004; Block, 2005; Bachmann, 2006; Dehaene et al., 2006; Koch and Tsuchiya, 2007; Tsuchiya and Koch, 2008a,b.

A more recent paper, On the Neural Mechanisms Subserving Consciousness and Attention, by Catherine Tallon-Baudry (2012, Jan, online), offers additional support for this perspective. Here is her abstract:
Consciousness, as described in the experimental literature, is a multi-faceted phenomenon that impinges on other well-studied concepts such as attention and control. Do consciousness and attention refer to different aspects of the same core phenomenon, or do they correspond to distinct functions? One possibility to address this question is to examine the neural mechanisms underlying consciousness and attention. If consciousness and attention pertain to the same concept, they should rely on shared neural mechanisms. Conversely, if their underlying mechanisms are distinct, then consciousness and attention should be considered as distinct entities. This paper therefore reviews neurophysiological facts arguing in favor of or against a tight relationship between consciousness and attention. Three neural mechanisms that have been associated with both attention and consciousness are examined (neural amplification, involvement of the fronto-parietal network, and oscillatory synchrony), to conclude that the commonalities between attention and consciousness at the neural level may have been overestimated. Last but not least, experiments in which both attention and consciousness were probed at the neural level point toward a dissociation between the two concepts. It therefore appears from this review that consciousness and attention rely on distinct neural properties, although they can interact at the behavioral level. It is proposed that a “cumulative influence model,” in which attention and consciousness correspond to distinct neural mechanisms feeding a single decisional process leading to behavior, fits best with available neural and behavioral data. In this view, consciousness should not be considered as a top-level executive function but should rather be defined by its experiential properties.
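To give a rough sense of how the "cumulative influence model" named at the end of the abstract might be pictured, here is a schematic sketch. It is my own construction, not code from Tallon-Baudry's paper; the function names, signal values, and threshold are all hypothetical.

```python
# Schematic sketch of a "cumulative influence model": attention and
# consciousness are modeled as distinct mechanisms whose outputs feed
# one decisional process leading to behavior. Purely illustrative.

def attention_evidence(stimulus: dict) -> float:
    # Distinct mechanism 1: top-down amplification of task-relevant input.
    return stimulus.get("task_relevance", 0.0)

def consciousness_evidence(stimulus: dict) -> float:
    # Distinct mechanism 2: the experiential visibility of the stimulus.
    return stimulus.get("visibility", 0.0)

def behave(stimulus: dict, threshold: float = 1.0) -> bool:
    # Single decisional process: the two influences accumulate here, which
    # is how mechanisms that dissociate at the neural level can still
    # interact at the behavioral level.
    return attention_evidence(stimulus) + consciousness_evidence(stimulus) >= threshold

# An attended but barely visible stimulus can still drive behavior.
print(behave({"task_relevance": 0.8, "visibility": 0.3}))  # True
```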
At the recent Evolution and Function of Consciousness Summer School ("Turing Consciousness 2012") held at the University of Montreal as part of Alan Turing Year, Tallon-Baudry lectured on the topic of this paper: Is Consciousness an Executive Function?





The reason this is important (as I will discuss further in the series of posts on Global Workspace Theory and the Future Evolution of Consciousness) is that this model offers extensive neurobiological support for a variety of human development theories.

For example, the ability to focus attention on the self as an object and to experience this distal part of the self subjectively in consciousness - the adult development model of Robert Kegan - is given a neurobiological substrate that has been missing so far.

Likewise, if consciousness and attention operate with distinct neural systems, then we are able to explain how the brain is able to focus attention on other brain functions (such as the verbal stream of consciousness) and hold them as objects in consciousness, which is the basis of meditation and mindfulness practice.

According to Baars and others, we can focus attention on any brain function and make it conscious - evidence for this comes from monks who have demonstrated an ability to alter their brainwave patterns, the quality and quantity of specific brain functions, and so on. The take-home message is that we can use attention to "evolve" our own brain function and potentially increase our developmental level through focused attention, mindfulness, and other practices.

Equally important, meditation can increase the capacity of working memory (roughly seven unrelated items), and working memory is essentially the space of our momentary consciousness. Van Leeuwen, Singer, and Melloni (2012), in Meditation increases the depth of information processing and improves the allocation of attention in space, demonstrated essentially what the title of their paper indicates.

These authors hypothesized that the practice of meditation, which requires that we learn to focus our attention on a specific object (a mantra, the breath, a flame, etc.) while also learning to recognize and stop intrusive thoughts ("disengaging quickly from distracters") - what they call responding to a "dual demand" - can strengthen the efficiency with which we allocate attention. Their study centered on a four-day meditation retreat (using both focused attention meditation and open monitoring meditation). They conclude:
[T]hese results suggest that practicing meditation enhances the speed with which attention can be allocated and relocated, thus increasing the depth of information processing and reducing response latency.
In a related study, Metta McGarvey at Harvard (2010) argues that "mindful awareness catalyzes transformational change through optimally integrating conceptual and pre-conceptual ways of knowing" (Mindful Leadership Study). Part of her research compared an Integral Practice (IP) with basic mindfulness practice and found IP more highly correlated with mindfulness scores (brief results here).

Anyway, I will have more to say about attention and consciousness in a future post.

Mark Pagel - Wired for Culture: Origins of the Human Social Mind


Here are two reviews of Mark Pagel's recent book, Wired for Culture: Origins of the Human Social Mind. The first is from Maria Popova at Brain Pickings (a brief but positive review) and the other is from Steven Rose at the Times Higher Education blog (a sharply critical review).

Wired for Culture: How Language Enabled “Visual Theft,” Sparked Innovation, and Helped Us Evolve

by Maria Popova

Why remix culture and collaborative creativity are an evolutionary advantage.

Much has been said about what makes us human and what it means to be human. Language, which we’ve previously seen co-evolved with music to separate us from our primal ancestors, is not only one of the defining differentiators of our species, but also a key to our evolutionary success, responsible for the hallmarks of humanity, from art to technology to morality. So argues evolutionary biologist Mark Pagel in Wired for Culture: Origins of the Human Social Mind — a fascinating new addition to these 5 essential books on language, tracing 80,000 years of evolutionary history to explore how and why we developed a mind hard-wired for culture.
Our cultural inheritance is something we take for granted today, but its invention forever altered the course of evolution and our world. This is because knowledge could accumulate as good ideas were retained, combined, and improved upon, and others were discarded. And, being able to jump from mind to mind granted the elements of culture a pace of change that stood in relation to genetical evolution something like an animal’s behavior does to the more leisurely movement of a plant.
[…]
Having culture means we are the only species that acquires the rules of its daily living from the accumulated knowledge of our ancestors rather than from the genes they pass to us. Our cultures and not our genes supply the solutions we use to survive and prosper in the society of our birth; they provide the instructions for what we eat, how we live, the gods we believe in, the tools we make and use, the language we speak, the people we cooperate with and marry, and whom we fight or even kill in a war.
But how did “culture” develop, exactly? Language, says Pagel, was instrumental in enabling social learning — our ability to acquire evolutionarily beneficial new behaviors by watching and imitating others, which in turn accelerated our species on a trajectory of what anthropologists call “cumulative cultural evolution,” a bustling of ideas successively building and improving on others. (How’s that for bio-anthropological evidence that everything is indeed a remix?) It enabled what Pagel calls “visual theft” — the practice of stealing the best ideas of others without having to invest the energy and time they did in developing those.


It might seem, then, that protecting our ideas would have been the best evolutionary strategy. Yet that’s not what happened — instead, we embraced this “theft,” a cornerstone of remix culture, and propelled ourselves into a collaboratively crafted future of exponential innovation. Pagel explains:
Social learning is really visual theft, and in a species that has it, it would become positively advantageous for you to hide your best ideas from others, lest they steal them. This not only would bring cumulative cultural adaptation to a halt, but our societies might have collapsed as we strained under the weight of suspicion and rancor.
So, beginning about 200,000 years ago, our fledgling species, newly equipped with the capacity for social learning, had to confront two options for managing the conflicts of interest social learning would bring. One is that these new human societies could have fragmented into small family groups so that the benefits of any knowledge would flow only to one’s relatives. Had we adopted this solution we might still be living like the Neanderthals, and the world might not be so different from the way it was 40,000 years ago, when our species first entered Europe. This is because these smaller family groups would have produced fewer ideas to copy and they would have been more vulnerable to chance and bad luck.
The other option was for our species to acquire a system of cooperation that could make our knowledge available to other members of our tribe or society even though they might be people we are not closely related to — in short, to work out the rules that made it possible for us to share goods and ideas cooperatively. Taking this option would mean that a vastly greater fund of accumulated wisdom and talent would become available than any one individual or even family could ever hope to produce. That is the option we followed, and our cultural survival vehicles that we traveled around the world in were the result.
“Steal like an artist” might then become “Steal like an early Homo sapiens,” and, as Pagel suggests, it is precisely this “theft” that enabled the origination of art itself.

Sample Wired for Culture with Pagel’s excellent talk from TEDGlobal 2011:



* * * * * * *

Wired for Culture: The Natural History of Human Cooperation

8 March 2012

'Brain candy' is hard to swallow

A grand biological theory for what makes us so special does not convince Steven Rose

Charles Darwin was clear about it. "I am convinced that natural selection has been the main but not the exclusive means of modification," he wrote in the introduction to On the Origin of Species in 1859. Unfortunately, many of Darwin's modern acolytes seem to have forgotten the master's cautionary words and turn one mechanism among several into a totalising theory. Mark Pagel, a distinguished evolutionary biologist, is but the latest. His central concern - and an important one for anyone interested in human history and society - is to understand the origins of those human attributes that mark us out most distinctly from other living forms: our capacities for speech, social organisation and the creation of culture and technology.

Biologists are of course committed to the view that these are evolved properties. For one unorthodox strand of evolutionary theory, whose early protagonist was the Russian anarchist prince Peter Kropotkin but which was championed in modern form by theorists including Lynn Margulis and David Sloan Wilson, this presents no problem: cooperation between individuals and even between species can serve as a motor of evolutionary change. But for the dominant strand of ultra-Darwinists, natural selection works by ruthless competition between individuals of the same species, and any trait must have a selective advantage to the individual - that is, the genes that confer or enable it must have enhanced our ancestors' reproductive success. Hence the problem: how could cooperation, empathy and altruism, even to the extent of sacrificing one's life to save others, increase a person's chance of transferring copies of his or her genes to the next generation?

Pagel's mentor in these matters, Richard Dawkins, distinguished between genes as replicators and organisms as passive vehicles utilised by the genes to ensure their transmission into the next generation. That is, you and I are merely our genes' way of making copies of themselves. Dawkins, and following him Pagel, then introduces two mutually contradictory hypotheses to solve the problem of culture and seemingly non-adaptive traits. The first proposes that culture enhances our genes' chances: the typically masculinist example is that being good at art or music ("brain candy", says Pagel) or having a reputation for valour serves a man like the peacock's tail serves the peacock - a sign to the female that its owner carries good genes and therefore is worth mating with. When writing in this mode, Pagel suggests that, just as organisms are survival vehicles for genes, so social structures - such as groups or tribes, which require a degree of mutual trust and cooperation - are cultural survival vehicles for their members, and hence their genes. Thus a group's culture is part of each individual group member's extended phenotype.

The second, contradictory hypothesis is that culture is an autonomous life form; it consists of units like genes, called memes, that inhabit an individual's mind and are transmitted between individuals through our unique powers of communication. Typical memes are advertising jingles or fashion practices, such as wearing baseball caps backwards. Notoriously, Dawkins proposed, and Pagel follows him, that religion too is a meme - but in this case an infective, hostile virus that poisons the minds it takes over. But as a coherent theory, memeology is about as convincing as Scientology; anything that can embrace religion, advertising jingles and fashions in headwear as if they are all examples of the same unitary thing, a meme, which can jump from brain to brain, should have been laughed out of court years ago, as philosopher Mary Midgley has cogently argued.

In Wired for Culture, Pagel employs both these hypotheses in broad speculations ranging from the evolutionary motivations of suicide bombers to the origins of speech, without apparently recognising that they are contradictory. Confusions of level abound, as when Pagel says: "Our brains can effortlessly think..." But it is we who think, using our brains. Written in a patronising tone and replete with fairy stories about Pleistocene men cooperating by agreeing that one should sharpen spears while the other chips stone hand axes (presumably the women are home doing the cooking as usual), the book misses entirely one of the most convincing arguments for the evolution of human sociality, centred on human mothers' unique preparedness to share the parenting of their children (called "alloparenting" by the evolutionary biologist Sarah Blaffer Hrdy). Grand unitary theories of everything used to be the province of physicists. It's a pity that biology has shed its modesty; we have enough to say about things we do know about without trying to take over the world.

By Mark Pagel
Allen Lane, 432pp, £25.00
ISBN 9781846140150
Published 1 March 2012
 
Reviewer: Steven Rose is emeritus professor of life science, The Open University.

Friday, July 27, 2012

William Hirstein, Ph.D. - Ten Tests for Theories of Consciousness

Nice post - and this list is very useful in judging all of the competing theories of consciousness that are being thrown around as though each one is the only truth.

About the author:
William Hirstein is both a philosopher and a scientist, having published numerous scientific articles, including works on phantom limbs, autism, consciousness, sociopathy, and the misidentification syndromes. He is the author of several books, including On Searle (Wadsworth, 2001), On the Churchlands (Wadsworth, 2004), Brain Fiction: Self-Deception and the Riddle of Confabulation (MIT, 2005), and Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy (Oxford, 2012).

The article:

A Philosophical-Scientific Decathlon

This list comprises a sort of decathlon for theorists of consciousness to test their creations against. Some of the tests might have ‘skeptical solutions’, in that theorists might claim that we were mistaken in thinking that they were relevant to consciousness at all. Even in this case, however, an account of error is owed: Why did we believe that they were relevant to consciousness?

1.      Relate consciousness to mind. Are there unconscious mental states or unconscious parts of the mind? How do they relate to the conscious mental states or conscious parts of the mind?

2.      Relate consciousness to perception. How do sound waves, light waves, etc. entering the sense organs get transformed into conscious perceptions? Why does all perception have a focus-background structure? Or does it?

3.      Relate consciousness to dreams. Are dreams conscious states? If they are conscious states, how are they different from our normal waking conscious states, and how are these differences to be explained?

4.      Relate consciousness to the self. Is there a self? If there is a self, what is it? How does the self relate to the mind? Is it the entirety of the mind, or some portion of it? Where is it in the brain? What is its function? If there is not a self, why have so many people believed there is one?

5.      Relate consciousness to representation. Are all conscious states representations of some sort? Can there be non-conscious beings who can nevertheless represent the world? How do mental representations relate to external representations, such as photographs and paintings?

6.      Relate consciousness to the brain. If consciousness has no intrinsic connection to the brain, why do things that affect the brain, such as blows to the head, psychoactive drugs, etc., also affect the mind? Where in the brain are conscious states located? Which brain processes correspond to (or are identical to) conscious states?

7.      Explain what the function of consciousness is. Given that any external behavior can be generated by a number of internal mechanisms, some involving consciousness, but some not, why do our brains use consciousness? Are there ways to achieve consciousness other than the way that our brains do it?

8.      Provide an account of the explanatory gap. Why are conscious states, as we experience them, so different from the brain, as we look at it from the outside? In short, how does something that looks like that produce something that feels like this?

9.      Provide an account of error for the other major theories. It is often the case that asking the proponents of a theory to explain where the other accounts went wrong serves to greatly clarify that theory, in addition to the obvious testing function the exercise serves. Accounts of error need to be plausible, in that they cannot depict the proponents of the other views as blithering idiots, obstinately making the same obvious mistakes over decades or centuries. To continue our decathlon analogy, here we are asking the decathletes to fight one another.

10.   Relate your account to the history of the study of consciousness. Who, if any, of the historical writers on consciousness were right, and who were wrong, and why? Was Rene Descartes right about mind and matter being metaphysically separate, or about there being a substantial self or Cartesian ego? If not, where exactly did he go wrong in his thinking?  Was Thomas Nagel right when he said we could never really know what it’s like to be a bat?

No doubt this list will evolve over time. As we get closer to a correct account, certain parts of the list might begin to fall out as uncontroversial, while new entries are added. The new entries might come from the scientific side, in the form of questions based on discoveries. Or they might come from the philosophical side, in the form of inferences derived from the testing of theories via arguments and counterexamples. Ideally, the set of tests would become more and more complete and exacting, until only one theory can pass them, the correct theory.

Face to Face with Carl Jung: ‘Man Cannot Stand a Meaningless Life’

This is an excellent video of Carl Jung talking about our human need for a meaningful life (among many other things). I found this at Open Culture - yesterday was Jung's birthday and posting this video was their tribute. This is one of the best interviews I have seen with Jung, and he was 84 at the time this was recorded.

Face to Face with Carl Jung: ‘Man Cannot Stand a Meaningless Life’

in Psychology | July 26th, 2012



Today is the birthday of Carl Gustav Jung, founder of analytic psychology and explorer of the collective unconscious. He was born on July 26, 1875 in the village of Kesswil, in the Thurgau canton of Switzerland. To observe the day we present a fascinating 39-minute interview of Jung by John Freeman for the BBC program Face to Face. It was filmed at Jung’s home at Küsnacht, on the shore of Lake Zürich, and broadcast on October 22, 1959, when Jung was 84 years old. He speaks on a range of subjects, from his childhood and education to his association with Sigmund Freud and his views on death, religion and the future of the human race. At one point when Freeman asks Jung whether he believes in God, Jung seems to hesitate. “It’s difficult to answer,” he says. “I know. I don’t need to believe. I know.”

David Lance - A new kind of creature … brought to life

This is pretty cool . . . from the TED blog.

A new kind of creature … brought to life


At TED2007, artist Theo Jansen shared his work creating a new form of life — which can actually survive on its own — from plastic tubes and bottles. In this 3D-animation film, David Lance imagines Jansen’s creature walking through a park, morphing into metal and becoming a spider-like form that can jump cars, fly and, eventually, talk.


Thursday, July 26, 2012

David Berreby - Psychologists Assume It's Possible to Know A Person: What If They're Wrong?

This is an important and interesting post by David Berreby at Big Think's Mind Matters blog. In the aftermath of James Holmes' apparent killing of 12 people (with 58 others wounded) - for absolutely no identifiable reason (so far) - "our modern priesthood of experts" is recruited by every news show to "explain" why he possibly could have done this horrible thing.
And they look to their correlations of variables, their indicators and theories, and find precisely nothing. Where are the indicators we want to see, the ones we can associate with senseless slaughter? Where are the traits whose presence would reassure us that it is possible to know who is vulnerable to the lure of mass murder?

There are none. No indications of mental illness earlier in life. No signs of a violent or troubled childhood. No signs of drug abuse. Instead, traces of a mild blank person, whose signature trait seemed to be that he left little impression at all on other people. A background guy, the sort who doesn't make us alert for trouble. His story threatens formal psychology and folk psychology, because it tells us that another person can't be known, not for certain. (That he was himself a grad student in neuroscience, aiming to elucidate how the brain causes behavior, adds a quality of mockery to the tale.)
 It's not a long column, but it's worth the read.



Psychologists Assume It's Possible to Know A Person. What If They're Wrong?

David Berreby on July 23, 2012 [updated on 7/24/2012]

John Marzluff - Gifts of the Crow: How Perception, Emotion, and Thought Allow Smart Birds to Behave Like Humans

Amazon has a book review feature that I had kind of forgotten about, called Omnivoracious. An old friend from Seattle, who knows I love all things related to crows and ravens, sent me a link to the review of a new book by John Marzluff called Gifts of the Crow: How Perception, Emotion, and Thought Allow Smart Birds to Behave Like Humans. Marzluff is a professor at the University of Washington, in Seattle, which has one of the densest populations of crows on the planet.

Here is the brief review (which is actually a commentary by the author on the question: why crows?):

Are Crows Smarter Than Us? John Marzluff Explains

We’ve seen a few fun bird books this year, including Bird Sense: What It is Like to Be a Bird, a Best of the Month pick in April, and What The Robin Knows: How Birds Reveal the Secrets of the Natural World.

Now comes John Marzluff's captivating Gifts of the Crow: How Perception, Emotion, and Thought Allow Smart Birds to Behave Like Humans. Among other shockers I learned that crows (and their kin, ravens and jays) have huge brains and street smarts, they drink coffee and beer, they use tools and language, and they're even capable of murder.

So we asked the author, What’s the deal with crows? Are they, like, the smartest birds on the planet? Here's what Marzluff, a University of Washington professor, has to say.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here’s the deal with crows. They are basically flying monkeys. Their brains are as large as a monkey’s brain of their size would be; much larger than other birds’. Remember the Wizard of Oz? That pack of flying primates that hunted Dorothy and her companions had nothing (stylish hats and coats aside) on the crows that roam your backyard. The monkeys seemed only to express the emotions of fear or anger, and were fully under the control of the Wicked Witch of the West. Heck, in the movie they couldn’t even speak! Crows, on the other hand, always seem to express their free will. They can imitate the human voice and often do so! In Montana, one crow was so adept at mimicking a master’s “Here Boy, Come Boy” that it could call dogs, and did so on many occasions. This talking crow even assembled a pack of mutts by flying from house to house and fooling dogs into thinking they were following their owner! With his pack in tow, the crow headed to the University of Montana campus, kept them at attention beneath a tree, and ran them through students as they walked between classrooms! Whether for fun or possibly to dislodge a sandwich remains unknown, but certainly Oz’s monkeys couldn’t do that!

Crows are super smart because they have long lifespans, spend considerable portions of their lives with others in social groups, and learn quickly through trial and error and by observation. Their brains, like our own, allow crows to form lasting, emotionally charged memories. They dream and reconsider what they see and hear before acting (something we aren’t always so good at). They go through their lives much as we do—sensing, considering, forming a plan of action, and adjusting. They can recognize and remember individual people, for years. In Sweden, magpies (close relatives of crows) learned to ring a doorbell whenever they saw the lady of the house because she occasionally fed them. Her husband never fed them and even attempted to shoo them away. In retaliation they crapped on his car windshield every morning. It’s a good thing monkeys can’t really fly.

But are crows the smartest birds on the planet? I wouldn’t go quite that far, but certainly they are among the smartest. As with people, it is hard to develop a standardized test of intelligence with which to score and rank birds. If the test required making a tool, then the New Caledonian crow would win. If, on the other hand, the test required the creation of a new word, then the African Gray Parrot would likely win. We humans are today administering perhaps the toughest test ever on those with whom we share Earth. Our rapid transformation of land, co-option of resources, and change of climate challenge all plants and animals. To this test, again the crow is well suited. Their large and complex brains allow crows to innovate, intuit, and quickly associate reward and punishment with action. Through this complex cognition they are able to solve whatever humanity throws their way: they adapt to new foods, new lands, new resources, and likely a new climate. So, if the test includes the question “can you live with people?” then crows by their wonderful adaptability will come out at the head of the class.
--John Marzluff

(The illustration on the right--in which a crow rounds up a pack of dogs--is by Tony Angell, whose drawings and diagrams of mischievous and playful crows appear throughout Gifts of the Crow, a perfect complement to Marzluff's lively storytelling.)

Steven Pinker - The Better Angels of Our Nature: Why Violence Has Declined


Stanford University recently hosted Steven Pinker speaking about his most recent book, The Better Angels of Our Nature: Why Violence Has Declined. He offers a very hopeful story of human history - that we are less violent and more compassionate in general. I think there are issues with some of his conjectures (he is an evolutionary psychologist at heart, and I find that model highly problematic), but he offers a LOT of statistics to support his premise.

It's a nice talk - even if the video is a little sketchy (his slides are rarely shown).



Steven Pinker - The Better Angels of Our Nature: Why Violence Has Declined

(June 29, 2012) Steven Pinker argues that, contrary to popular belief, violence has declined over long stretches of time and today we may be living in the most peaceable era in our species's existence.

The Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University held its first annual Behavioral Science Summit on the theme of Social Meets Science, highlighting the latest developments in the field in a conversational and interactive format.

Wednesday, July 25, 2012

Free Video Stream - The Buddhist Geeks Conference 2012 - August 9–11, 2012

The Buddhist Geeks Conference is right around the corner, and this year we can watch/listen online for free thanks to Sounds True. This year's conference runs from August 9-11 in Boulder, CO.


The Buddhist Geeks Conference 2012
August 9–11, 2012
One FREE video event

Not Your Normal Buddhist Conference
Discover the emerging new faces of Buddhism at the Buddhist Geeks Conference 2012 via live video streaming.

It’s an opportunity to explore the leading-edge frontiers of Buddhism, technology, and global culture. This year’s gathering brings together luminaries in the fields of Buddhism, science, philosophy, education, business, politics, and more. Participants will explore how the dharma is co-evolving with modern insights and trends to change our lives—personally and globally—in extraordinary and unexpected ways.

As Buddhist Geeks co-founder Vincent Horn describes it: “Some Buddhist events can be boring and predictable and tend to draw on the same group of speakers again and again. When we launched the conference last year, we wanted to create one with a completely fresh and innovative format, in which we could explore new topics in new ways. Feedback was very enthusiastic. This year’s conference in Boulder, Colorado, will build on what people liked the best from last year, while adding exciting new elements. We hope you’ll be joining us!”


Your live video streaming pass, available here exclusively through Sounds True, gives you up-close coverage of all featured events. Can’t make a live session? Don't worry! Recordings from live sessions will be available by 7 pm Eastern Time (GMT -4), two business days after the conference concludes.

Featured Speakers

Lama Surya Das: American Lama
Amber Case: Cyborg Anthropologist
Stephen Batchelor: Buddhist Author
Elizabeth Mattis-Namgyel: Teacher and Author
Matt Flannery: Kiva.org CEO
Martine Batchelor: Teacher and Author
Michael Stone: Buddhist Activist
Daniel Ingram: Practical Dharma Teacher
Willoughby Britton: Contemplative Scientist
Ken McLeod: Teacher – Executive Coach
Tami Simon: Sounds True CEO
Rohan Gunatillake: Meditation Entrepreneur
Stuart Davis: Bodhisattva Rocker
Vincent Horn: Buddhist Geek
Robert Spellman: Contemplative Artist

Jacques Derrida On Religion


These two videos are really just audio, but the lectures are interesting for anyone who enjoys postmodern philosophy. Jacques Derrida was nearly a rock star as far as philosophers are concerned, which is rare in our day. Zizek has achieved nearly as much fame in recent years, but his fame has more to do with his outrageous behavior and commentaries (which are part of his philosophy); Derrida's was due more to his ideas, especially those concerning justice.

In order to make sense of Derrida's position on religion - or anything else - it helps to understand his use of the word deconstruction and what the method means as he uses it. This is from the Wikipedia entry on Deconstruction, and there is more from the Stanford Encyclopedia of Philosophy below the videos.

From Différance to Deconstruction

Derrida approaches all texts as constructed around elemental oppositions which all speech has to articulate if it intends to make any sense whatsoever. This is so because identity is viewed in non-essentialist terms as a construct, and because constructs only produce meaning through the interplay of differences inside a "system of distinct signs". This approach to text, in a broad sense,[1][2] emerges from semiology advanced by Ferdinand de Saussure.

Saussure is considered one of the fathers of structuralism; he explained that terms get their meaning in reciprocal determination with other terms inside language:
In language there are only differences. Even more important: a difference generally implies positive terms between which the difference is set up; but in language there are only differences without positive terms. Whether we take the signified or the signifier, language has neither ideas nor sounds that existed before the linguistic system, but only conceptual and phonic differences that have issued from the system. The idea or phonic substance that a sign contains is of less importance than the other signs that surround it. [...] A linguistic system is a series of differences of sound combined with a series of differences of ideas; but the pairing of a certain number of acoustical signs with as many cuts made from the mass of thought engenders a system of values.[3]
Saussure explicitly suggested that linguistics was only a branch of a more general semiology, a science of signs in general, human codes being only one among others. Nevertheless, in the end, as Derrida pointed out, he made of linguistics "the regulatory model", and "for essential, and essentially metaphysical, reasons had to privilege speech, and everything that links the sign to phone".[4] Derrida preferred to follow the more "fruitful paths (formalization)" of a general semiotics without falling into what he considered "a hierarchizing teleology" privileging linguistics, and to speak of 'mark' rather than of language - not as something restricted to mankind, but as prelinguistic, as the pure possibility of language, working everywhere there is a relation to something else.

Derrida then sees these differences, as elemental oppositions (0-1), working in all "languages", all "systems of distinct signs", all "codes", where terms don't have an "absolute" meaning, but can only get it from reciprocal determination with the other terms (1-0). This structural difference is the first component that Derrida takes into account when articulating the meaning of différance, a mark he felt the need to create and one that would become a fundamental tool in his lifelong work: deconstruction.[5]:
1) Différance is the systematic play of differences, of the traces of differences, of the spacing by means of which elements are related to each other. This spacing is the simultaneously active and passive (the a of différance indicates this indecision as concerns activity and passivity, that which cannot be governed by or distributed between the terms of this opposition) production of the intervals without which the "full" terms would not signify, would not function.
But structural difference will not be considered without Derrida already destabilizing from the start its static, synchronic, taxonomic, ahistoric motifs, remembering that all structure already refers to the generative movement in the play of differences.[6]

The other main component of différance is deferring, which takes into account the fact that meaning is not only a question of synchrony with all the other terms inside a structure, but also of diachrony, with everything that was said and will be said, in History - difference as structure and deferring as genesis.[7]:
2) "the a of différance also recalls that spacing is temporization, the detour and postponement by means of which intuition, perception, consummation - in a word, the relationship to the present, the reference to a present reality, to a being - are always deferred. Deferred by virtue of the very principle of difference which holds that an element functions and signifies, takes on or conveys meaning, only by referring to another past or future element in an economy of traces. This economic aspect of différance, which brings into play a certain not conscious calculation in a field of forces, is inseparable from the more narrowly semiotic aspect of différance.
This confirms the subject as not present to itself, and as constituted in becoming space, in temporizing; and also, as Saussure said, that "language [which consists only of differences] is not a function of the speaking subject."[8] Having questioned this myth of the presence of meaning in itself ("objective") and/or for itself ("subjective"), Derrida would start a long deconstruction of all texts where conceptual oppositions are put to work in the actual construction of meaning and values based on the subordination of the movement of "differance"[7]:
At the point at which the concept of differance, and the chain attached to it, intervenes, all the conceptual oppositions of metaphysics (signifier/signified; sensible/intelligible; writing/speech; passivity/activity; etc.)- to the extent that they ultimately refer to the presence of something present (for example, in the form of the identity of the subject who is present for all his operations, present beneath every accident or event, self-present in its "living speech," in its enunciations, in the present objects and acts of its language, etc.)- become non pertinent. They all amount, at one moment or another, to a subordination of the movement of differance in favor of the presence of a value or a meaning supposedly antecedent to differance, more original than it, exceeding and governing it in the last analysis. This is still the presence of what we called above the "transcendental signified."
But, as Derrida also points out, these relations with other terms express not only meaning but also values. The way elemental oppositions are put to work in all texts is not only a theoretical operation but also a practical option. The first task of deconstruction, starting with philosophy and afterwards revealing it operating in literary texts, juridical texts, etc., would be to overturn these oppositions[9]:
On the one hand, we must traverse a phase of overturning. To do justice to this necessity is to recognize that in a classical philosophical opposition we are not dealing with the peaceful coexistence of a vis-a-vis, but rather with a violent hierarchy. One of the two terms governs the other (axiologically, logically, etc.), or has the upper hand.

To deconstruct the opposition, first of all, is to overturn the hierarchy at a given moment. To overlook this phase of overturning is to forget the conflictual and subordinating structure of opposition.
It’s not that the final task of deconstruction is to surpass all oppositions, because they are structurally necessary to produce sense; they simply cannot be suspended once and for all. But this doesn’t mean that they don’t need to be analyzed and criticized in all their manifestations, showing the way these oppositions, both logical and axiological, are at work in all discourse for it to be able to produce meaning and values.[10]

And it is not enough for deconstruction to expose the way oppositions work and how meaning and values are produced in speech of all kinds, and then to stop there in a nihilistic or cynical position regarding all meaning, "thereby preventing any means of intervening in the field effectively".[11]
To be effective, deconstruction needs to create new concepts, not to synthesize the terms in opposition, but to mark their difference and eternal interplay[12]:
That being said - and on the other hand - to remain in this phase is still to operate on the terrain of and from within the deconstructed system. By means of this double, and precisely stratified, dislodged and dislodging, writing, we must also mark the interval between inversion, which brings low what was high, and the irruptive emergence of a new concept that can no longer be, and never could be, included in the previous regime. If this interval, this biface or biphase, can be inscribed only in a bifurcated writing, then it can only be marked in what I would call a grouped textual field: in the last analysis it is impossible to point it out, for a unilinear text, or a punctual position, an operation signed by a single author, are all by definition incapable of practicing this interval.
This explains why Derrida always proposes new terms in his deconstruction, not as a free play but as a pure necessity of analysis, to better mark the intervals:
Henceforth, in order better to mark this interval it has been necessary to analyze, to set to work, within the text of the history of philosophy, as well as within the so-called literary text (for example, Mallarme), certain marks, shall we say (I mentioned certain ones just now, there are many others), that by analogy (I underline) I have called undecidables, that is, unities of simulacrum, "false" verbal properties (nominal or semantic) that can no longer be included within philosophical (binary) opposition: but which, however, inhabit philosophical oppositions, resisting and disorganizing them, without ever constituting a third term, without ever leaving room for a solution in the form of speculative dialectics
Some examples of these new terms created by Derrida clearly exemplify the deconstruction procedure[12]:
  • (the pharmkon is neither remedy nor poison, neither good nor evil, neither the inside nor the outside, neither speech nor writing; 
  • the supplement is neither a plus nor a minus, neither an outside nor the complement of an inside, neither accident nor essence, etc.; 
  • the hymen is neither confusion nor distinction, neither identity nor difference, neither consummation nor virginity, neither the veil nor unveiled, neither inside nor the outside, etc.; 
  • the gram is neither a signifier nor a signified, neither a sign nor a thing, neither a presence nor an absence, neither a position nor a negation, etc.;
  • spacing is neither space nor time;
  • the incision is neither the incised integrity of a beginning, or of a simple cutting into, nor simple secondarity.
Nevertheless, perhaps Derrida's most famous mark was, from the start, différance, created to deconstruct the opposition between speech and writing and to open the way to the rest of his approach:
and this holds first of all for a new concept of writing, that simultaneously provokes the overturning of the hierarchy speech/writing, and the entire system attached to it, and releases the dissonance of a writing within speech, thereby disorganizing the entire inherited order and invading the entire field.
OK then, here is the introduction to the videos.


Jacques Derrida on Religion
Jacques Derrida was one of the most well known twentieth century philosophers. He was also one of the most prolific. Distancing himself from the various philosophical movements and traditions that preceded him on the French intellectual scene (phenomenology, existentialism, and structuralism), he developed a strategy called "deconstruction" in the mid 1960s. Although not purely negative, deconstruction is primarily concerned with something tantamount to a critique of the Western philosophical tradition. Deconstruction is generally presented via an analysis of specific texts. It seeks to expose, and then to subvert, the various binary oppositions that undergird our dominant ways of thinking—presence/absence, speech/writing, and so forth. (Internet Encyclopedia of Philosophy)

Part One:

[video embedded in original post]

Part Two:

[video embedded in original post]

From the Stanford Encyclopedia of Philosophy entry on Derrida:

Deconstruction

As we said at the beginning, “deconstruction” is the most famous of Derrida's terms. He seems to have appropriated the term from Heidegger's use of “destruction” in Being and Time. But we can get a general sense of what Derrida means with deconstruction by recalling Descartes's First Meditation. There Descartes says that for a long time he has been making mistakes. The criticism of his former beliefs both mistaken and valid aims towards uncovering a “firm and permanent foundation.” The image of a foundation implies that the collection of his former beliefs resembles a building. In the First Meditation then, Descartes is in effect taking down this old building, “de-constructing” it. We have also seen how much Derrida is indebted to traditional transcendental philosophy which really starts here with Descartes' search for a “firm and permanent foundation.” But with Derrida, we know now, the foundation is not a unified self but a divisible limit between myself and myself as an other (auto-affection as hetero-affection: “origin-heterogeneous”).

Derrida has provided many definitions of deconstruction. But three definitions are classical. The first is early, being found in the 1971 interview “Positions” and in the 1972 Preface to Dissemination: deconstruction consists in “two phases” (Positions, pp. 41-42, Dissemination, pp.4-6). At this stage of his career Derrida famously (or infamously) speaks of “metaphysics” as if the Western philosophical tradition was monolithic and homogeneous. At times he also speaks of “Platonism,” as Nietzsche did. Simply, deconstruction is a criticism of Platonism, which is defined by the belief that existence is structured in terms of oppositions (separate substances or forms) and that the oppositions are hierarchical, with one side of the opposition being more valuable than the other. The first phase of deconstruction attacks this belief by reversing the Platonistic hierarchies: the hierarchies between the invisible or intelligible and the visible or sensible; between essence and appearance; between the soul and body; between living memory and rote memory; between mnēmē and hypomnēsis; between voice and writing; between finally good and evil. In order to clarify deconstruction's “two phases,” let us restrict ourselves to one specific opposition, the opposition between appearance and essence. Nietzsche had also criticized this opposition but it is clearly central to phenomenological thinking as well. So, in Platonism, essence is more valuable than appearance. In deconstruction however, we reverse this, making appearance more valuable than essence. How? Here we could resort to empiricist arguments (in Hume for example) that show that all knowledge of what we call essence depends on the experience of what appears. But then, this argumentation would imply that essence and appearance are not related to one another as separate oppositional poles. The argumentation in other words would show us that essence can be reduced down to a variation of appearances (involving the roles of memory and anticipation). The reduction is a reduction to what we can call “immanence,” which carries the sense of “within” or “in.” So, we would say that what we used to call essence is found in appearance, essence is mixed into appearance. Now, we can back track a bit in the history of Western metaphysics. On the basis of the reversal of the essence-appearance hierarchy and on the basis of the reduction to immanence, we can see that something like a decision (a perhaps impossible decision) must have been made at the beginning of the metaphysical tradition, a decision that instituted the hierarchy of essence-appearance and separated essence from appearance. This decision is what really defines Platonism or “metaphysics.” After this retrospection, we can turn now to a second step in the reversal-reduction of Platonism, which is the second “phase” of deconstruction. The previously inferior term must be re-inscribed as the “origin” or “resource” of the opposition and hierarchy itself. How would this re-inscription or redefinition of appearance work? Here we would have to return to the idea that every appearance or every experience is temporal. In the experience of the present, there is always a small difference between the moment of now-ness and the past and the future. (It is perhaps possible that Hume had already discovered this small difference when, in the Treatise, he speaks of the idea of relation.) 
In any case, this infinitesimal difference is not only a difference that is non-dualistic, but also it is a difference that is, as Derrida would say, “undecidable.” Although the minuscule difference is virtually unnoticeable in everyday common experience, when we in fact notice it, we cannot decide if we are experiencing the past or the present, if we are experiencing the present or the future. Insofar as the difference is undecidable, it destabilizes the original decision that instituted the hierarchy. After the redefinition of the previously inferior term, Derrida usually changes the term's orthography, for example, writing “différence” with an “a” as “différance” in order to indicate the change in its status. Différance (which is found in appearances when we recognize their temporal nature) then refers to the undecidable resource into which “metaphysics” “cut” in order to make its decision. In “Positions,” Derrida calls names like “différance” “old names” or “paleonyms,” and there he also provides a list of these “old terms”: “pharmakon”; “supplement”; “hymen”; “gram”; “spacing”; and “incision” (Positions, p. 43). These names are old because, like the word “appearance” or the word “difference,” they have been used for centuries in the history of Western philosophy to refer to the inferior position in hierarchies. But now, they are being used to refer to the resource that has never had a name in “metaphysics”; they are being used to refer to the resource that is indeed “older” than the metaphysical decision.

This first definition of deconstruction as two phases gives way to the refinement we find in the “Force of Law” (which dates from 1989-1990). This second definition is less metaphysical and more political. In “Force of Law,” Derrida says that deconstruction is practiced in two styles (Deconstruction and the Possibility of Justice, p. 21). These “two styles” do not correspond to the “two phases” in the earlier definition of deconstruction. On the one hand, there is the genealogical style of deconstruction, which recalls the history of a concept or theme. Earlier in his career, in Of Grammatology, Derrida had laid out, for example, the history of the concept of writing. But now what is at issue is the history of justice. On the other hand, there is the more formalistic or structural style of deconstruction, which examines a-historical paradoxes or aporias. In “Force of Law,” Derrida lays out three aporias, although they all seem to be variants of one, an aporia concerning the unstable relation between law (the French term is “droit,” which also means “right”) and justice.

Derrida calls the first aporia, “the epoche of the rule” (Deconstruction and the Possibility of Justice, pp. 22-23). Our most common axiom in ethical or political thought is that to be just or unjust and to exercise justice, one must be free and responsible for one's actions and decisions. Here Derrida in effect is asking: what is freedom? On the one hand, freedom consists in following a rule; but in the case of justice, we would say that a judgment that simply followed the law was only right, not just. For a decision to be just, not only must a judge follow a rule but also he or she must “re-institute” it, in a new judgment. Thus a decision aiming at justice (a free decision) is both regulated and unregulated. The law must be conserved and also destroyed or suspended, suspension being the meaning of the word “epoche.” Each case is other, each decision is different and requires an absolutely unique interpretation which no existing coded rule can or ought to guarantee. If a judge programmatically follows a code, he or she is a “calculating machine.” Strict calculation or arbitrariness, one or the other is unjust, but they are both involved; thus, in the present, we cannot say that a judgment, a decision is just, purely just. For Derrida, the “re-institution” of the law in a unique decision is a kind of violence since it does not conform perfectly to the instituted codes; the law is always, according to Derrida, founded in violence. The violent re-institution of the law means that justice is impossible. Derrida calls the second aporia “the ghost of the undecidable” (Deconstruction and the Possibility of Justice, pp. 24-26). A decision begins with the initiative to read, to interpret, and even to calculate. But to make such a decision, one must first of all experience what Derrida calls “undecidability.” One must experience that the case, being unique and singular, does not fit the established codes and therefore a decision about it seems to be impossible. The undecidable, for Derrida, is not mere oscillation between two significations. It is the experience of what, though foreign to the calculable and the rule, is still obligated. We are obligated – this is a kind of duty – to give oneself up to the impossible decision, while taking account of rules and law. As Derrida says, “A decision that did not go through the ordeal of the undecidable would not be a free decision, it would only be the programmable application or unfolding of a calculable process” (Deconstruction and the Possibility of Justice, p. 24). And once the ordeal is past (“if this ever happens,” as Derrida says), then the decision has again followed or given itself a rule and is no longer presently just. Justice therefore is always to come in the future, it is never present. There is apparently no moment during which a decision could be called presently and fully just. Either it has not followed a rule, hence it is unjust; or it has followed a rule, which has no foundation, which makes it again unjust; or if it did follow a rule, it was calculated and again unjust since it did not respect the singularity of the case. This relentless injustice is why the ordeal of the undecidable is never past. It keeps coming back like a “phantom,” which “deconstructs from the inside every assurance of presence, and thus every criteriology that would assure us of the justice of the decision” (Deconstruction and the Possibility of Justice, pp. 24-25).
Even though justice is impossible and therefore always to come in or from the future, justice is not, for Derrida, a Kantian ideal, which brings us to the third aporia. The third is called “the urgency that obstructs the horizon of knowledge” (Deconstruction and the Possibility of Justice, pp. 26-28). Derrida stresses the Greek etymology of the word “horizon”: “As its Greek name suggests, a horizon is both the opening and limit that defines an infinite progress or a period of waiting.” Justice, however, even though it is un-presentable, does not wait. A just decision is always required immediately. It cannot furnish itself with unlimited knowledge. The moment of decision itself remains a finite moment of urgency and precipitation. The instant of decision is then the moment of madness, acting in the night of non-knowledge and non-rule. Once again we have a moment of irruptive violence. This urgency is why justice has no horizon of expectation (either regulative or messianic). Justice remains an event yet to come. Perhaps one must always say “can-be” (the French word for “perhaps” is “peut-être,” which literally means “can be”) for justice. This ability for justice aims however towards what is impossible.

Even later in Derrida's career he will formalize, beyond these aporias, the nature of deconstruction. The third definition of deconstruction can be found in an essay from 2000 called “Et Cetera.” Here Derrida in fact presents the principle that defines deconstruction:
Each time that I say ‘deconstruction and X (regardless of the concept or the theme),’ this is the prelude to a very singular division that turns this X into, or rather makes appear in this X, an impossibility that becomes its proper and sole possibility, with the result that between the X as possible and the ‘same’ X as impossible, there is nothing but a relation of homonymy, a relation for which we have to provide an account…. For example, here referring myself to demonstrations I have already attempted …, gift, hospitality, death itself (and therefore so many other things) can be possible only as impossible, as the im-possible, that is, unconditionally (Deconstructions: a User's Guide, p. 300, my emphasis).
Even though the word “deconstruction” has been bandied about, we can see now the kind of thinking in which deconstruction engages. It is a kind of thinking that never finds itself at the end. Justice – this is undeniable – is impossible (perhaps justice is the “impossible”) and therefore it is necessary to make justice possible in countless ways.