
Thursday, October 02, 2014

Barbara J. King - Can Psychedelics Expand Our Consciousness?

From NPR's 13.7 Cosmos and Culture blog, Barbara J. King takes issue with a passage in Sam Harris's new book - Waking Up: A Guide to Spirituality Without Religion - in which he advocates that most adults try a psychedelic at least once in their lives, calling it a "rite of passage."

Ms. King enlists anthropologists Greg Downey and Daniel Lende, who co-blog at Neuroanthropology, to respond to the passages from Harris. Both are somewhat dismissive of the benefits of psychedelics for personal or spiritual growth. However, they each make the point that Harris's view imagines an individual, isolated mind ingesting these substances, while these anthropologists believe the experience of psychedelics goes beyond its effects in the brain and is grounded more in socio-cultural context.

I kind of agree with them. 


Can Psychedelics Expand Our Consciousness?



by Barbara J. King
October 02, 2014

Geometry of the Soul.
Andrew Ostrovsky/iStockphoto

"One of the great responsibilities we have as a society is to educate ourselves, along with the next generation, about which substances are worth ingesting and for what purpose and which are not....If I knew that either of my daughters would eventually develop a fondness for methamphetamine or heroin, I might never sleep again. But if they don't try a psychedelic like psilocybin or LSD at least once in their adult lives, I will wonder whether they had missed one of the most important rites of passage a human being can experience."
Coming late in a new book by Sam Harris called Waking Up: A Guide to Spirituality Without Religion, this passage snapped me to attention. It's not that Harris's book had lulled me up to that point: It's a provocative, informative and, at times, infuriating look at consciousness and the self. Its main argument is that techniques exist, meditation prime among them, to reduce human suffering by helping us to understand that the self — as conventionally understood — is an illusion. Our feeling of "I" is a product of thought, and thoughts merely come and go in our consciousness; there's no self behind our eyes or in our heads, and when we grasp this, it's easier to unmoor ourselves from the sources of suffering in our lives.

The ways in which Harris supports this thesis are worth reading. Yet as a parent of a college-age daughter, I found that it was his move beyond meditation — Harris's expressed hope that his kids, once they become adults, will ingest psychedelics — that made me stop and think hard. Is Harris's wish an ethical one? What can my field of anthropology bring to bear in thinking about this matter?

On this topic of psychedelics, Harris has an advantage that I lack. Not only has he spent considerable time in serious meditative practice, he also has experienced moments of immense beauty and love — and other moments of total terror — on MDMA (ecstasy), psilocybin (mushrooms) and LSD. I grew up in the '60s in a family whose lives centered closely on law enforcement — my father was a captain in the New Jersey State Police — and I wasn't exactly the drug-experimenting type. In high school and college, I watched a few friends go through trips good and bad, but that's as close as I got.

Harris is candid about the risks of ingesting psychedelics:

"There is no getting around the role of luck here. If you are lucky, and you take the right drug, you will know what it is to be enlightened (or to be close enough to persuade you that enlightenment is possible). If you are unlucky, you will know what it is to be clinically insane."
Harris describes one LSD trip as plunging him into "a continuous shattering and terror for which I have no words."

Some readers, Harris notes at the outset, may want to consult their mental-health professionals before carrying out any of the ideas he endorses (including meditation), and he concludes that after expanding one's consciousness through drugs "it seems wise" to find other practices that "do not present the same risks."

So how should we think about the psychedelic-ingestion experience in connection with a search for enlightenment? Research in neuroscience certainly shows real change in the brain from the action of psychedelic drugs. But I don't think it's enough to say that the outcome of any given trip is a matter of which drug one ingests — and of individual luck.

Like everything else humans do, ingesting psychedelics — even if we are totally alone while doing so — is a cultural matter, and the outcomes are culturally contingent. Anthropologists Greg Downey and Daniel Lende, who co-blog at Neuroanthropology, each made the same point to me in separate emails this week when I invited them to respond to Harris's passages about enlightenment through psychedelic drug use: "One could say that Harris goes a bit far," were Lende's words. He continued:

"Certainly taking 100[micrograms] of LSD will produce a big pharmacological effect on the person; whether that relates to some understanding of personal significance is a more open book. Many anthropologists would say that he's over-emphasizing the individual view of things, in line with Western approaches to the mind. Put differently, the link between psychoactive effect and meaning is mediated by the immediate context, personal history, the framing given to the use, and larger cultural patterns.
I'm sure there are people who have rather muted responses to psychedelics, or exaggerated responses, and part of that will lie in the person's biology, from genetics to states of arousal to how they've learned to interpret psychedelic experiences.
And as I might put it as an anthropologist, experiences of spirituality can also be had through engaging with others, for example, talking to someone who has had psychosis (rather than Harris's exaggerated view of it, likely not grounded in personal experience but cultural ideas) or talking about transformative and spiritual experiences with others."
Downey made the point to me that no intoxicant has a predictable response:

"Across cultures, intoxicants of all sorts have quite different effects: even alcohol has no uniform effect. In some places, it leads groups of people to fight, in others to cry to themselves, in others still, to raucous singing or dancing. Although the chemical may have a specific effect — such as emotional disinhibition or visions — the effect in the brain does not necessarily produce the same emotional or perceptual effect. Hallucinogens may make us prone to have visions, but they do not determine entirely what sort of visions they will be, or whether they will be a profound and life changing event, or a temporary intoxication.
To really have the sorts of effects that Sam Harris hopes for, you would need the symbolic resources and social support to leverage the hours of intoxication into serious insight. In a society hostile to visions, convinced that there is no higher form of consciousness than everyday awareness, I think it's unlikely that most users would have the sorts of experiences that Harris describes. Some will, and they show us what's possible; but they do not point the way to anything inevitable."
It's hard to escape our own cultural lens: It's a constant struggle, for anthropologists as much as anyone. Still, Harris's perspective would, I think, be strengthened by his explicitly considering the variable cultural contexts for what he's espousing.

"It is your mind, rather than circumstances themselves, that determines the quality of your life," he writes, then repeats this sentiment in similar words throughout the book.

Does Harris really think that mushrooms and meditation are enough to overcome, to take but one example, a life of hunger or poor health emerging from poverty? Of course not. Harris is smarter than that — but he's writing for a certain audience, and what comes to the fore is not a global perspective on human suffering or on what society should do about it.

Circling back to the passage from Waking Up that I used to open this post, I'll wager that we can collectively create a lengthy list of responsibilities that our society has — to each other, to our environment, to other animals — that take priority over eating mushrooms or dropping acid and urging our adult children to do so.




Barbara's most recent book on animals was released in paperback in April. You can keep up with what she is thinking on Twitter: @bjkingape

Wednesday, July 02, 2014

3 Things Everyone Should Know Before Growing Up (NPR)

This is a nice post from NPR's 13.7 Cosmos and Culture blog - numbers one and three would have helped me immeasurably if someone had shared that wisdom. Number two I would have ignored, but it has become a cornerstone of how I live my life. There is IQ (what the tests measure) and then there is intelligence, maybe better known as wisdom (knowledge + experience).

3 Things Everyone Should Know Before Growing Up

by TANIA LOMBROZO
June 30, 2014


We take it for granted that children should play. Why not adults? iStockphoto

With peak graduation season just behind us, we've all had the chance to hear and learn from commencement speeches — without even needing to attend a graduation. They're often full of useful advice for the future as seniors move on from high school and college. But what about the stuff you wish you'd been told long before graduation?

Here are just three of the many things I wish I'd known in high school, accumulated at various points along the way to becoming a professor of psychology.

1. People don't judge you as harshly as you think they do.

In a 2001 study, psychologists Kenneth Savitsky, Nicholas Epley and Thomas Gilovich asked college students to consider various social blunders: accidentally setting off the alarm at the library, being the sole guest at a party who failed to bring a gift or being spotted by classmates at the mall while carrying a shopping bag from an unfashionable store. Some students imagined experiencing these awkward moments themselves — let's call them the "offenders" — while others considered how they, or another observer, would respond watching someone else do so. We'll call them the "observers."

The researchers found that offenders thought they'd be judged much more harshly than the observers actually judged people for those offenses. In other words, observers were more charitable than offenders thought they would be.

In another study, students who attempted a difficult set of anagrams thought observers' perception of their intellectual ability would plummet. In fact, observers' opinions hardly shifted at all.

Why do we expect others to judge us more harshly than they do?

One of the main reasons seems to be our obsessive focus on ourselves and our own blunders. If you fail to bring a gift to a party, you might feel embarrassed and focus exclusively on that single bit of information about you. In contrast, other people will form an impression of you based on lots of different sources of information, including your nice smile and your witty banter. They'll also have plenty to keep them occupied besides you: enjoying a conversation, taking in the view, planning their evening or worrying about the impression that they are making. We don't loom nearly as large in other people's narratives as we do in our own.

Now, it isn't the case that others are always charitable. Sometimes they do judge us harshly. What the studies find is that others judge us less harshly than we think they will. But that should be enough to provide some solace. We can take it as an invitation to worry less about what others think of us and as a reminder to be generous in how we judge them.

2. You should think of intelligence as something you develop.

Is a person's intelligence a fixed quantity they're born with? Or is it something malleable, something that can change throughout the lifespan?

The answer is probably a bit of both. But a large body of research suggests you're better off thinking of intelligence as something that can grow — a skill you can develop — and not as something set in stone. Psychologist Carol Dweck and her colleagues have been studying implicit theories or "mindsets" about intelligence for decades, and they find that mindset really matters. People who have a "growth mindset" typically do better in school and beyond than those with a "fixed mindset."

One reason mindset is so important is that it affects how people respond to feedback.

Suppose George and Francine both do poorly on a math test. George has a growth mindset, so he thinks to himself: "I'd better do something to improve my mathematical ability. Next time I'll do more practice problems!" Francine has a fixed mindset, so she thinks to herself: "I guess I'm no good at math. Next time I won't bother with the honors course!" And when George and Francine are given the option of trying to solve a hard problem for extra credit, George will see it as an attractive invitation to grow his mathematical intelligence and Francine as an unwelcome opportunity to confirm she's no good at math.

Small differences in how George and Francine respond will, over time, generate big differences in the experiences they expose themselves to, their attitude toward math and the proficiency they ultimately achieve. (The gendered name choices here are not accidental: Girls often have a fixed mindset when it comes to mathematical ability; mindset probably accounts for some of the gender gap in girls' and boys' performance in mathematics in later school years.)

The good news is that mindsets are themselves malleable. Praising children's effort rather than their intelligence, for example, can help instill a growth mindset. And simply reading about the brain's plasticity might be enough to shift people's mindsets and generate beneficial effects.

That's enough to convince me that whether or not intelligence is malleable, our skills and achievements — the things we do with our intelligence — certainly are. Let's do what we can to "grow" them.

3. Playing isn't a waste of time.

We take it for granted that children can and should play. By adulthood, that outlook is expected to give way as we make time for more "mature" preoccupations. In her recent book Overwhelmed: Work, Love, and Play When No One Has the Time, Brigid Schulte takes a close look at how American adults spend their leisure time. She isn't too impressed: We don't have much of it (especially women and especially mothers), and we don't enjoy it as much as we could.

Young adults are somewhere in the transition: too old for "child's play" and not yet into adulthood. But the lesson from psychology is that there's a role for play at all ages, whether it's elaborate games of make-believe, rule-based games, unstructured summer playtime or forms of "higher culture," like art, music and literature. Playing is a way to learn about ourselves and about the world. Playing brings with it a host of emotional benefits.

Play is joyful in part because it's an end in itself. It's thus perhaps ironic (but fortuitous) that play is also a means to greater wellbeing and productivity, even outside the playroom. So make time for play; it's not something to outgrow.

Finally, if you're in search of more advice, check out NPR's collection of more than 300 commencement addresses, covering 1774 to the present.

Sunday, June 08, 2014

Alva Noë - 'Rosemary's Baby' Thrills With Unfathomable Mystery

 

This is a very cool article in which philosopher Alva Noë looks at one of my favorite films through a perspective I have never taken or even considered - the film is about the act of coming to realize that something is wrong. It's clear something is wrong long before Rosemary is able to accept that the people she trusts are not who they say they are.

And that realization brings in the more important theme of "can we ever truly know another person?" In essence, this is about theory of mind. Except that here, as in reality, we know that others have an interior world to which we are not privy; we "can't get inside their heads to learn what they really think and feel. We are always at a remove from the other."

This article comes from NPR's 13.7 Cosmos and Culture blog.

'Rosemary's Baby' Thrills With Unfathomable Mystery

by ALVA NOË | June 07, 2014

I watched Rosemary's Baby, by Roman Polanski, again last night. It is a monster movie. And like the best movies in this genre, you could call it a skepticism movie. It is philosophical. And, remarkably, it is terrifying because it is philosophical.


Mia Farrow in Rosemary's Baby. The Kobal Collection/Paramount

Things aren't going right with young Rosemary. Her husband is distant, removed, self-centered; he is unkind and even brutal with her; he spends his free time with their new neighbors, an odd, elderly couple next door. Rosemary's pregnancy is difficult; she has pains continually and is losing weight; neither her doctor nor her husband seems to be interested in helping. Rosemary is in a stupor, as if she were under the influence of drugs.

She has their new apartment painted brightly to lighten it up. But this does little to dispel the dark in its halls and rooms. The only bright spot in this dim scenario is that Guy, who is an actor, has had a turn of good luck at work. His rival woke up blind and Guy got the big part. Professional success is around the corner.

Rosemary's Baby is a story about coming to recognize that something is wrong. At first Rosemary resists this conclusion. What could be wrong? Occasionally pregnancies are painful. Husbands get caught up with pressures and demands at work. Life isn't supposed to play out like a fairy tale. She resolves to be a better wife, to reach out to Guy and help him be more open in their relationship.

Now the character of the movie's skepticism shifts to one of philosophy's enduring skeptical concerns: the very possibility of knowing other people and what they think and feel.

Philosophers have long noticed that there is room for doubt in this domain: all we ever really know, when it comes to others, is what they say and do. We can't get inside their heads to learn what they really think and feel. We are always at a remove from the other. It is also clear, to philosophy and to us all, that this intellectual worry can be safely set aside in the course of our ordinary lives. Questions about what those around us are thinking and feeling and doing, let alone questions about whether they have inner lives at all, don't seriously arise. Which itself raises a philosophical question: if the basis of our knowledge is so slight, why is our confidence in other minds so robust? (I explore this in my book Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness.)

Sometimes these sorts of theoretical worries achieve practical prominence in the setting of neurological trauma, when we are confronted by, for example, a persistent vegetative state and must make decisions about what is going on inside the mind of a badly injured person.

But Rosemary's Baby addresses these questions in what is, if possible, an even more terrifying way. It gradually becomes clear, to Rosemary, and to us, the audience, that we can no longer trust Guy, or the neighbors, or the doctor, or just about anyone else in Rosemary's life. Skepticism about the thoughts and feelings of those around Rosemary is now a hypothesis that must be taken seriously. Her life, and that of her baby, depends on it. She is the victim, it seems, of an elaborate plot. And almost everyone, even those closest to her, is in on it.

Is this not madness? It certainly has that look and feel. How could everyone be in on it? This, then, ratchets the movie's skeptical theme up a notch: could it be that she is hallucinating or confabulating the whole thing? Could this be some sort of pre-partum hysteria? Can she, can we, know what is real?

But Rosemary's Baby is not just a psychological thriller. It is a monster movie. And what makes Rosemary's predicament so very difficult is the fact that what is really going on, so we come to believe, what is really driving events in this film, is so unlikely, so impossible, so unthinkable, as to rule out the possibility of anything like a straightforward "figuring out" of what's happening.

Satan himself has come to Earth and raped Rosemary, with the assistance of her husband and almost everyone else she knows. This is too far-fetched to be true. It is too far-fetched to be even thinkable.

The distinctive charms and fascinations of horror films arise at this kind of juncture, according to the noted philosopher of art Noël Carroll. It is the hallmark of all narrative forms that they supply us with cognitive delights. Plot intrigues; we are curious; curiosity motivates us to follow the story, to figure out what's going on, to understand how forces at play in a situation drive the action inexorably forward. Plot is cognitive, and the pleasure of story arises from the achievement of getting it.

I think Carroll is right about this. The basic idea was anticipated already in Aristotle's treatment of tragedy. Plot, Aristotle argued, is the life and soul of tragedy. And plot is concerned not with mere event, not with one thing happening after another, but with human action. So to tell a good story, or to enjoy one in the audience, you need to be sensitive to what makes an action significant in the setting of a human life. You need to be a student of human nature and experience.

It is because the meaning and importance of a work of dramatic fiction comes in the exploration of ideas about human experience that it is possible to enjoy a play just by reading it. It isn't spectacle that moves us; for Aristotle, it's understanding. Which doesn't mean that one does not also enjoy felt or emotional responses to the story. A tragedy, Aristotle thought, always aims to arouse fear and pity. But it doesn't aim to produce emotion the way a ride on a roller coaster produces a sense of danger. Fear is not merely an effect on us or in us. It is an expression of our sensitivity to what is playing out in the story and so it is itself an achievement of understanding and insight.

Now the distinctive difference between horror and other genres — this is Carroll's argument — is that the heart of the horror genre is a monstrous phenomenon that actually, truly, makes no sense. Monsters are unfathomable. They are unknowable. They are betwixt and between. Neither alive nor dead, neither human nor animal, neither natural nor, really, unnatural. They are, as Carroll puts it, interstitial. The point is that there is no understanding of the monstrous. There is no genuine satisfaction of our curiosity.

A good horror movie, I would say, then, is a kind of paradox in itself. It engages you in a mystery whose intrinsic character rules out, or threatens to rule out, its resolution. And it is the distinct feature of art horror — as opposed to what might horrify us in real life — that it affords the opportunity for philosophical engagement with the unresolvable. From this point of view, the fact that we find the monster scary is secondary. We don't like horror movies because we like to feel negative emotions. It is, rather, that the negative emotions are outweighed by the philosophical delights.

I'm not sure whether this account does justice to Rosemary's Baby. Perhaps its real fright stems from its suggestion that the philosopher's unresolved skeptical puzzles about the limits of knowledge of others and the world around us reveal the underlying reality of our condition, the fact of our total, absolute, abject, terrifying isolation. But is that something we take pleasure in discovering?

~ You can keep up with more of what Alva Noë is thinking on Facebook and on Twitter: @alvanoe.

Tuesday, November 12, 2013

Huh? What If One Word Could Unite the World - Alva Noë

From NPR's 13.7 Cosmos and Culture blog, philosopher Alva Noë reports on a new study that suggests there is, indeed, a word common to all languages: the word "huh?"

The article appeared in the open-access journal PLoS ONE and was titled "Is “Huh?” a Universal Word? Conversational Infrastructure and the Convergent Evolution of Linguistic Items." The abstract appears below this article.

Could One Word Unite The World?


by Alva Noë
November 11, 2013

The word for milk in German is "Milch." In French it is "lait." Two quite different words — Milch, lait — for one thing. This is the basic observation that supports the linguistic principle that the relation between words and their meanings is arbitrary. You can't read the meaning off the word. And what a word means doesn't determine or shape the word itself. The bottom line: you need to learn words.

And that's why you don't find the same words in every language. Sameness of word implies a shared history. No shared history, no shared words. English and German share the word for milk (German "Milch"), but that's because German and English share a common history. And there are words like "OK" that have pretty wide circulation but only thanks to globalization and the influence of English.

It would be astonishing if there were a word — or a group of words — that was actually native to all languages.

This is precisely the claim made in a fascinating paper by Mark Dingemanse and his colleagues at the Max Planck Institute for Psycholinguistics in Nijmegen, Holland, published this past Friday in PLoS ONE.

"Huh?" — as in, huh? what did you say? — it is claimed, is a universal word. It occurs in every language (or in some suitably large sample of unrelated languages).

They do not claim "huh?" occurs in exactly the same form in all languages. Think "Milch" and "milk." A certain amount of variation is consistent with word identity, not only across languages, but within language. Some English speakers say "mulk," others "melk," and so on. And so for this case.

In the case of "huh?" there are other kinds of differences too. It's a question word, and different languages use different prosody to mark the interrogative mood (e.g. some languages, like English, use rising intonation, whereas others, like Icelandic, use falling).

Exactly how "huh?" gets said varies from language to language.

Which turns out to be crucial, for it rules out a natural objection to the claim of universality. "Huh?" is universal, it might be said, because it isn't a word! It isn't the sort of sound that needs to be learned. You don't need to learn to sneeze, or grunt. You don't need to learn to jump when you are startled. "Huh?" must be like this.

But you do need to learn to say "huh?" in just the way we need to learn the word for milk and how to ask questions. "Huh?" is not only universal, like sneezing; it is a word, like "milk."

This brings us to the central puzzle the authors face: given that you need to learn words, and that meanings don't fix the sound, shape or character of the words we use to express them, and given that linguistic cultures are diverse and unrelated, how could there be universal words?

The authors' proposal is startling. I reserve judgment on whether it's right or not.

Their basic claim is that this is an example of what in biology is called convergent evolution. Sometimes lineages that are unrelated evolve the same traits as adaptations to the same environmental conditions. Evolution in cases such as this converges. And that, according to the authors, is what's going on here. It turns out that every language faces the "huh?" problem. That is, every language needs a way for a listener to signal to the speaker that the message has not been received. (Every language needs what the authors call a mechanism for "Other-Initiated Repair.") Why? Because where there is communication there is liable to be miscommunication. Just as missing the ball comes with playing catch, so not hearing, or not understanding what you hear, not getting it, goes with speech. Where there is speech, you need a way to say: huh?

Their bold claim is that only interjections that sound roughly like "huh?" can do this. "Huh?" is so optimal — it's short, easy to produce, easy to hear, capable of carrying a questioning tone, and so on — that every human language has stumbled upon it as a solution.

Is sounding the same and doing the same communicative job enough to make these all instances of the same word?

Hmm.

You can keep up with more of what Alva Noë is thinking on Facebook and on Twitter: @alvanoe
* * * * *

Is “Huh?” a Universal Word? Conversational Infrastructure and the Convergent Evolution of Linguistic Items


Mark Dingemanse, Francisco Torreira, N. J. Enfield

Abstract


A word like Huh? – used as a repair initiator when, for example, one has not clearly heard what someone just said – is found in roughly the same form and function in spoken languages across the globe. We investigate it in naturally occurring conversations in ten languages and present evidence and arguments for two distinct claims: that Huh? is universal, and that it is a word. In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. In support of the second claim we show that it is a lexical, conventionalised form that has to be learnt, unlike grunts or emotional cries. We discuss possible reasons for the cross-linguistic similarity and propose an account in terms of convergent evolution. Huh? is a universal word not because it is innate but because it is shaped by selective pressures in an interactional environment that all languages share: that of other-initiated repair. Our proposal enhances evolutionary models of language change by suggesting that conversational infrastructure can drive the convergent cultural evolution of linguistic items.

Full Citation: 
Dingemanse M, Torreira F, Enfield NJ. (2013, Nov 8). Is “Huh?” a Universal Word? Conversational Infrastructure and the Convergent Evolution of Linguistic Items. PLoS ONE 8(11): e78273. doi:10.1371/journal.pone.0078273

Friday, August 09, 2013

Marcelo Gleiser - The Nature Of Consciousness: A Question Without An Answer?

It's a little strange to read Marcelo Gleiser (a professor of physics and astronomy) riffing about consciousness at NPR's 13.7 Cosmos and Culture blog, since this tends to be the domain of philosopher Alva Noë on that blog, but it is an interesting article.

Gleiser is thinking hard about David Chalmers' "Hard Problem of Consciousness," i.e., what it is like to be conscious and how that arises from the roughly three-pound lump of fatty tissue in our skulls. He rejects the notion that consciousness is a by-product of neuronal activity, as well as the computational model (the idea that we can simulate the brain and mind with a computer):
it becomes very hard to see how the subjective quality of the experiential mind will emerge from neuronal modeling in silicon chips: to capture thinking is not the same thing as capturing what the thinking is about.
I agree.

The Nature Of Consciousness: A Question Without An Answer?


by MARCELO GLEISER
August 07, 2013

How does our subjective reality emerge from the physical structures of the brain and body?
iStockphoto.com

Today I'd like to go back to a topic that leaves most people perplexed, me included: the nature of consciousness and how it "emerges" in our brains. I wrote about this a few months ago, promising to get back to it. At this point, no scientist or philosopher in the world knows how to answer it. If you think you know the answer, you probably don't understand the question:

Are you all matter?

Or, let's phrase it in a different way, a little less controversial and more amenable to a scientific discussion: how does the brain, a network of some 90 billion neurons, generate the subjective experience you have of being you?

Australian philosopher David Chalmers, now at New York University, dubbed this question "The Hard Problem of Consciousness." He did this to differentiate it from other problems, which he considers the "easy" ones, that is, those that can be solved through the diligent application of scientific research and methodology, as is already being done in cognitive and computational neuroscience. Even if some of these "easy" problems may take a century to solve, their difficulty doesn't even come close to that of the "hard" problem, which, some speculate, may be insoluble.

Note that, even if the hard problem may be insoluble, the majority of scientists and philosophers still stick to the hypothesis that matter is all there is and that "you" exist as a neuronal construction within your brain (and body, as the two are linked in many ways, not all understood yet).

Here are some of the problems Chalmers calls easy:
  • The ability to discriminate and react to external stimuli
  • The integration of sensory information
  • The difference between a state of wakefulness and sleep
  • The intentional control of behavior
These questions are on the whole localized, amenable to a reductionist description of how specific parts of the brain operate as electrochemical circuitry through myriad neural connections.

Recently, Henry Markram from the Federal Polytechnic School in Lausanne, Switzerland, received a billion-euro grant to lead the Human Brain Project, a consortium of more than a dozen European institutions that intends to create a full-blown simulation of the human brain. For this, they will need a supercomputer capable of more than a billion-billion operations per second (exaflops, where "exa" stands for 10¹⁸), about 50 times faster than today's high-end machines. Optimists believe that such computing power is within reach, possibly before the end of this decade.
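As a quick back-of-envelope check on those figures (a minimal sketch in Python; the implied 2013-era peak is derived here from the article's "about 50 times" claim, not stated in it):

# Scale check for the Human Brain Project's compute target.
exaflop = 1e18                  # "a billion-billion" operations per second
speedup = 50                    # the article's "about 50 times faster"
today_peak = exaflop / speedup  # implied 2013-era high-end peak
print(f"{today_peak:.0e} FLOPS")  # -> 2e+16, i.e., tens of petaflops

That works out to roughly 20 petaflops, which is indeed the scale of the fastest supercomputers of 2013.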

Of course, Markram's project, or the intent of modeling a human brain in full in a computer, clashes frontally with the notion of the hard problem.

Markram and the "computationalists" believe that if the simulation is sufficiently complete and detailed, including everything from the flow of neurotransmitters across each individual synapse to the amazingly complex network of the trillions of inter-synaptic connections across the brain tissue, it will function just as a human brain does, including a consciousness in every way as amazing as ours. To them, the hard problem doesn't exist: everything can be obtained by piling neuron upon neuron in computer-chip models, as bricks compose a house, plus all the other building details: plumbing, wiring, etc.

Although we must agree that Markram's project is of enormous scientific importance, I can't quite see how a computer simulation can create something like a human consciousness. Perhaps some other kind of consciousness, but not ours.

Another philosopher from New York University (that ought to be an amazing department to work in), Thomas Nagel, argued that we are incapable of understanding what it is like to be another animal, with its own subjective experience. He took bats as an example, probably because they construct their sense of reality through echolocation and are so different from us. Using ideas from MIT linguist Noam Chomsky, who has argued that every brain has cognitive limitations stemming from its design and evolutionary functionality (for example, a mouse will never talk), Nagel concluded that we will never truly understand what it is like to be a bat.

This is another way of thinking about Chalmers' hard problem, what philosopher Colin McGinn calls "cognitive closure." (McGinn has just left the University of Miami after much controversy. Who knows, maybe he will also join NYU's philosophy department?)

Back to McGinn's ideas: he and other "mysterians" defend the idea that our brains can only do so much, and one of the things they can't do is understand the nature of consciousness. Since this is a philosophical argument, there is of course no scientific proof of this limitation (what physicists fondly call a "no-go theorem"), but McGinn makes a compelling case, arguing that the difficulty comes from consciousness being nowhere and everywhere in the brain, thus not amenable to the methodical reductionist analysis we tend to apply to scientific issues.

This being the case, it becomes very hard to see how the subjective quality of the experiential mind will emerge from neuronal modeling in silicon chips: to capture thinking is not the same thing as capturing what the thinking is about.

McGinn leaves the door open to more advanced intelligences, with brains designed in more capable ways than ours. Of course, unless you are Ray Kurzweil and are convinced that it is just a matter of time before machines are able not just to simulate the mind but to leave us all behind, we can't reliably predict whether such technological marvels will come to be. But even if a more advanced (machine?) intelligence one day figures out what consciousness is about, it seems that for now we will have to continue living with the mystery of not knowing.