Wednesday, July 02, 2014

Free Will: Is Your Brain the Boss of You?

From Scientific American, this is a short but cool discussion between Joseph LeDoux and Michael Gazzaniga on the topic of free will - Gazzaniga wrote an excellent book on the subject, Who's in Charge?: Free Will and the Science of the Brain (2011).

Free Will: Is Your Brain the Boss of You? [Video]

By Mark Fischetti | June 30, 2014

The views expressed are those of the author and are not necessarily those of Scientific American.

Michael Gazzaniga

Philosophers have long debated whether we deliberately make each of the many decisions we face every day, or whether our brain makes them for us, on autopilot. Neuroscientists have shown, for example, that neurons in the brain initiate our response to various stimuli milliseconds before we’re even aware that we’re taking such an action.

This heady debate has taken a very practical turn in the past decade: whether individuals who commit crimes are actually responsible for them. Lawyers have argued in court that if the brain determines the mind, then defendants may not be responsible for their transgressions.

Michael Gazzaniga, director of the SAGE Center for the Study of Mind at the University of California, Santa Barbara, is at the forefront of the research into free will, and its implications in courtroom trials and in the expectations of different societies. His thoughts and proclamations are captured in an engaging video called Free Will, created by Joseph LeDoux, a well-known expert on the emotional brain at New York University. The video is the second in a series he is putting together with director Alexis Gambis called My Mind’s Eye. (The first episode featured Ned Block on the mind-body problem.) They have given Scientific American the chance to post these videos first, on our site.


How Free Is Your Will? An interview with Michael Gazzaniga from Imaginal Disc on Vimeo.

The conversation between LeDoux and Gazzaniga (who is also an editorial adviser to Scientific American) runs about 12 minutes, and Gambis has inserted some compelling imagery, including clips of creepy old movies in which scientists probe the brains of live people. The film then morphs into a four-minute music video of the song “How Free Is Your Will?,” performed by LeDoux’s band, the Amygdaloids. A few highlights of the interview, offered freely by me:

2:55 Split-brain patients. Gazzaniga explains what has happened to these people, and what they can teach us about how we make decisions.

6:50 Personal responsibility. This is the crux of the argument about whether responsibility for our actions lies in the neuronal structures of the brain or in our conscious minds, and whether biological mechanisms or society’s norms are more important in defining acceptable behavior. “There’s no reason to not hold people accountable for their actions,” Gazzaniga maintains. He then discusses how society can more intelligently decide on what to do with people who violate its rules.

9:05 Criminal trials. Gazzaniga discusses problems in referring to brain scans in courtroom trials and in sentencing people who are convicted, and considers the effectiveness of capital punishment, given what we know about free will.

12:00 Music video of “How Free Is Your Will?”

Further reading, suggested by Michael Gazzaniga:
  • For the Law, Neuroscience Changes Nothing and Everything. Joshua Greene and Jonathan Cohen in Philosophical Transactions of the Royal Society B, 359: 1775-1785; 2004.
  • The Law and Neuroscience. Michael S. Gazzaniga in Neuron, 60: 412-415; 2009.
  • Who’s in Charge? Free Will and the Science of the Brain. Michael S. Gazzaniga. Ecco/HarperCollins, New York; 2011.
  • Neuroscience in the Courtroom. Michael S. Gazzaniga in Scientific American, April 2011.
  • A Primer on Criminal Law and Neuroscience: A Contribution of the Law and Neuroscience Project. Eds. Stephen J. Morse and Adina L. Roskies. Oxford Series in Neuroscience, Law, and Philosophy; 2013.

About the Author: Mark Fischetti is a senior editor at Scientific American who covers energy, environment and sustainability issues. Follow on Twitter @markfischetti.

Tuesday, July 01, 2014

Nancy Andreasen - Secrets of the Creative Brain


Here is an excellent article on creativity and the brain from The Atlantic. This is one of the more comprehensive and interesting articles on creativity I have seen in the mainstream press. It's well worth the read.

Secrets of the Creative Brain

A leading neuroscientist who has spent decades studying creativity shares her research on where genius comes from, whether it is dependent on high IQ—and why it is so often accompanied by mental illness.

Nancy Andreasen | June 25, 2014

AS A PSYCHIATRIST and neuroscientist who studies creativity, I’ve had the pleasure of working with many gifted and high-profile subjects over the years, but Kurt Vonnegut—dear, funny, eccentric, lovable, tormented Kurt Vonnegut—will always be one of my favorites. Kurt was a faculty member at the Iowa Writers’ Workshop in the 1960s, and participated in the first big study I did as a member of the university’s psychiatry department. I was examining the anecdotal link between creativity and mental illness, and Kurt was an excellent case study.

He was intermittently depressed, but that was only the beginning. His mother had suffered from depression and committed suicide on Mother’s Day, when Kurt was 21 and home on military leave during World War II. His son, Mark, was originally diagnosed with schizophrenia but may actually have bipolar disorder. (Mark, who is a practicing physician, recounts his experiences in two books, The Eden Express and Just Like Someone Without Mental Illness Only More So, in which he reveals that many family members struggled with psychiatric problems. “My mother, my cousins, and my sisters weren’t doing so great,” he writes. “We had eating disorders, co-dependency, outstanding warrants, drug and alcohol problems, dating and employment problems, and other ‘issues.’ ”)

While mental illness clearly runs in the Vonnegut family, so, I found, does creativity. Kurt’s father was a gifted architect, and his older brother Bernard was a talented physical chemist and inventor who possessed 28 patents. Mark is a writer, and both of Kurt’s daughters are visual artists. Kurt’s work, of course, needs no introduction.

For many of my subjects from that first study—all writers associated with the Iowa Writers’ Workshop—mental illness and creativity went hand in hand. This link is not surprising. The archetype of the mad genius dates back to at least classical times, when Aristotle noted, “Those who have been eminent in philosophy, politics, poetry, and the arts have all had tendencies toward melancholia.” This pattern is a recurring theme in Shakespeare’s plays, such as when Theseus, in A Midsummer Night’s Dream, observes, “The lunatic, the lover, and the poet / Are of imagination all compact.” John Dryden made a similar point in a heroic couplet: “Great wits are sure to madness near allied, / And thin partitions do their bounds divide.”

Compared with many of history’s creative luminaries, Vonnegut, who died of natural causes, got off relatively easy. Among those who ended up losing their battles with mental illness through suicide are Virginia Woolf, Ernest Hemingway, Vincent van Gogh, John Berryman, Hart Crane, Mark Rothko, Diane Arbus, Anne Sexton, and Arshile Gorky.

My interest in this pattern is rooted in my dual identities as a scientist and a literary scholar. In an early parallel with Sylvia Plath, a writer I admired, I studied literature at Radcliffe and then went to Oxford on a Fulbright scholarship; she studied literature at Smith and attended Cambridge on a Fulbright. Then our paths diverged, and she joined the tragic list above. My curiosity about our different outcomes has shaped my career. I earned a doctorate in literature in 1963 and joined the faculty of the University of Iowa to teach Renaissance literature. At the time, I was the first woman the university’s English department had ever hired into a tenure-track position, and so I was careful to publish under the gender-neutral name of N. J. C. Andreasen.

Not long after this, a book I’d written about the poet John Donne was accepted for publication by Princeton University Press. Instead of feeling elated, I felt almost ashamed and self-indulgent. Who would this book help? What if I channeled the effort and energy I’d invested in it into a career that might save people’s lives? Within a month, I made the decision to become a research scientist, perhaps a medical doctor. I entered the University of Iowa’s medical school, in a class that included only five other women, and began working with patients suffering from schizophrenia and mood disorders. I was drawn to psychiatry because at its core is the most interesting and complex organ in the human body: the brain.

I have spent much of my career focusing on the neuroscience of mental illness, but in recent decades I’ve also focused on what we might call the science of genius, trying to discern what combination of elements tends to produce particularly creative brains. What, in short, is the essence of creativity? Over the course of my life, I’ve kept coming back to two more-specific questions: What differences in nature and nurture can explain why some people suffer from mental illness and some do not? And why are so many of the world’s most creative minds among the most afflicted? My latest study, for which I’ve been scanning the brains of some of today’s most illustrious scientists, mathematicians, artists, and writers, has come closer to answering this second question than any other research to date.

THE FIRST ATTEMPTED EXAMINATIONS of the connection between genius and insanity were largely anecdotal. In his 1891 book, The Man of Genius, Cesare Lombroso, an Italian physician, provided a gossipy and expansive account of traits associated with genius—left-handedness, celibacy, stammering, precocity, and, of course, neurosis and psychosis—and he linked them to many creative individuals, including Jean-Jacques Rousseau, Sir Isaac Newton, Arthur Schopenhauer, Jonathan Swift, Charles Darwin, Lord Byron, Charles Baudelaire, and Robert Schumann. Lombroso speculated on various causes of lunacy and genius, ranging from heredity to urbanization to climate to the phases of the moon. He proposed a close association between genius and degeneracy and argued that both are hereditary. Francis Galton, a cousin of Charles Darwin, took a much more rigorous approach to the topic. In his 1869 book, Hereditary Genius, Galton used careful documentation—including detailed family trees showing the more than 20 eminent musicians among the Bachs, the three eminent writers among the Brontës, and so on—to demonstrate that genius appears to have a strong genetic component. He was also the first to explore in depth the relative contributions of nature and nurture to the development of genius.

As research methodology improved over time, the idea that genius might be hereditary gained support. For his 1904 Study of British Genius, the English physician Havelock Ellis twice reviewed the 66 volumes of The Dictionary of National Biography. In his first review, he identified individuals whose entries were three pages or longer. In his second review, he eliminated those who “displayed no high intellectual ability” and added those who had shorter entries but showed evidence of “intellectual ability of high order.” His final list consisted of 1,030 individuals, only 55 of whom were women. Much like Lombroso, he examined how heredity, general health, social class, and other factors may have contributed to his subjects’ intellectual distinction. Although Ellis’s approach was resourceful, his sample was limited, in that the subjects were relatively famous but not necessarily highly creative. He found that 8.2 percent of his overall sample of 1,030 suffered from melancholy and 4.2 percent from insanity. Because he was relying on historical data provided by the authors of The Dictionary of National Biography rather than direct contact, his numbers likely underestimated the prevalence of mental illness in his sample.

A more empirical approach can be found in the early-20th-century work of Lewis M. Terman, a Stanford psychologist whose multivolume Genetic Studies of Genius is one of the most legendary studies in American psychology. He used a longitudinal design—meaning he studied his subjects repeatedly over time—which was novel then, and the project eventually became the longest-running longitudinal study in the world. Terman himself had been a gifted child, and his interest in the study of genius derived from personal experience. (Within six months of starting school, at age 5, Terman was advanced to third grade—which was not seen at the time as a good thing; the prevailing belief was that precocity was abnormal and would produce problems in adulthood.) Terman also hoped to improve the measurement of “genius” and test Lombroso’s suggestion that it was associated with degeneracy.

In 1916, as a member of the psychology department at Stanford, Terman developed America’s first IQ test, drawing from a version developed by the French psychologist Alfred Binet. This test, known as the Stanford-Binet Intelligence Scales, contributed to the development of the Army Alpha, an exam the American military used during World War I to screen recruits and evaluate them for work assignments and determine whether they were worthy of officer status.

Terman eventually used the Stanford-Binet test to select high-IQ students for his longitudinal study, which began in 1921. His long-term goal was to recruit at least 1,000 students from grades three through eight who represented the smartest 1 percent of the urban California population in that age group. The subjects had to have an IQ greater than 135, as measured by the Stanford-Binet test. The recruitment process was intensive: students were first nominated by teachers, then given group tests, and finally subjected to individual Stanford-Binet tests. After various enrichments—adding some of the subjects’ siblings, for example—the final sample consisted of 856 boys and 672 girls. One finding that emerged quickly was that being the youngest student in a grade was an excellent predictor of having a high IQ. (This is worth bearing in mind today, when parents sometimes choose to hold back their children precisely so they will not be the youngest in their grades.)
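
As a quick back-of-the-envelope check on that cutoff: on a modern deviation-IQ scale (mean 100, standard deviation 15; an assumption of mine, since Terman's original Stanford-Binet used ratio IQs), a score of 135 does sit almost exactly at the top 1 percent. A minimal sketch in Python:

    from statistics import NormalDist

    # Assumed deviation-IQ scale: mean 100, SD 15. Terman's 1921 Stanford-Binet
    # actually used ratio IQs, so this is only a rough sanity check.
    iq = NormalDist(mu=100, sigma=15)
    share_above_cutoff = 1 - iq.cdf(135)
    print(f"Share of the population above IQ 135: {share_above_cutoff:.2%}")
    # Prints about 0.98%, close to the "smartest 1 percent" target.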

These children were initially evaluated in all sorts of ways. Researchers took their early developmental histories, documented their play interests, administered medical examinations—including 37 different anthropometric measurements—and recorded how many books they’d read during the past two months, as well as the number of books available in their homes (the latter number ranged from zero to 6,000, with a mean of 328). These gifted children were then reevaluated at regular intervals throughout their lives.

“The Termites,” as Terman’s subjects have come to be known, have debunked some stereotypes and introduced new paradoxes. For example, they were generally physically superior to a comparison group—taller, healthier, more athletic. Myopia (no surprise) was the only physical deficit. They were also more socially mature and generally better adjusted. And these positive patterns persisted as the children grew into adulthood. They tended to have happy marriages and high salaries. So much for the concept of “early ripe and early rotten,” a common assumption when Terman was growing up.

But despite the implications of the title Genetic Studies of Genius, the Termites’ high IQs did not predict high levels of creative achievement later in life. Only a few made significant creative contributions to society; none appear to have demonstrated extremely high creativity levels of the sort recognized by major awards, such as the Nobel Prize. (Interestingly, William Shockley, who was a 12-year-old Palo Alto resident in 1922, somehow failed to make the cut for the study, even though he would go on to share a Nobel Prize in physics for the invention of the transistor.) Thirty percent of the men and 33 percent of the women did not even graduate from college. A surprising number of subjects pursued humble occupations, such as semiskilled trades or clerical positions. As the study evolved over the years, the term gifted was substituted for genius. Although many people continue to equate intelligence with genius, a crucial conclusion from Terman’s study is that having a high IQ is not equivalent to being highly creative. Subsequent studies by other researchers have reinforced Terman’s conclusions, leading to what’s known as the threshold theory, which holds that above a certain level, intelligence doesn’t have much effect on creativity: most creative people are pretty smart, but they don’t have to be that smart, at least as measured by conventional intelligence tests. An IQ of 120, indicating that someone is very smart but not exceptionally so, is generally considered sufficient for creative genius.


Kyle Bean

BUT IF HIGH IQ does not indicate creative genius, then what does? And how can one identify creative people for a study?

One approach, which is sometimes referred to as the study of “little c,” is to develop quantitative assessments of creativity—a necessarily controversial task, given that it requires settling on what creativity actually is. The basic concept that has been used in the development of these tests is skill in “divergent thinking,” or the ability to come up with many responses to carefully selected questions or probes, as contrasted with “convergent thinking,” or the ability to come up with the correct answer to problems that have only one answer. For example, subjects might be asked, “How many uses can you think of for a brick?” A person skilled in divergent thinking might come up with many varied responses, such as building a wall; edging a garden; and serving as a bludgeoning weapon, a makeshift shot put, a bookend. Like IQ tests, these exams can be administered to large groups of people. Assuming that creativity is a trait everyone has in varying amounts, those with the highest scores can be classified as exceptionally creative and selected for further study.
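
To make the scoring idea concrete, here is a minimal sketch of how such tests are often scored: "fluency" counts how many uses a subject produces, and "originality" weights each use by how rare it is across the whole sample. The weighting scheme and the toy brick responses below are illustrative assumptions of mine, not a validated instrument:

    from collections import Counter

    def score_divergent(all_responses: dict[str, list[str]]) -> dict[str, float]:
        """Toy divergent-thinking score: fluency plus rarity-weighted originality."""
        # How many subjects gave each use, pooled across the whole sample.
        counts = Counter(use for uses in all_responses.values() for use in uses)
        n = len(all_responses)
        scores = {}
        for subject, uses in all_responses.items():
            fluency = len(uses)                                 # sheer number of responses
            originality = sum(1 - counts[u] / n for u in uses)  # rarer uses score higher
            scores[subject] = fluency + originality
        return scores

    brick_uses = {
        "s1": ["build a wall", "edge a garden", "bookend", "makeshift shot put"],
        "s2": ["build a wall", "doorstop"],
        "s3": ["build a wall", "edge a garden"],
    }
    print(score_divergent(brick_uses))  # s1 scores highest: more uses, and rarer ones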

While this approach is quantitative and relatively objective, its weakness is that certain assumptions must be accepted: that divergent thinking is the essence of creativity, that creativity can be measured using tests, and that high-scoring individuals are highly creative people. One might argue that some of humanity’s most creative achievements have been the result of convergent thinking—a process that led to Newton’s recognition of the physical formulae underlying gravity, and Einstein’s recognition that E=mc².

A second approach to defining creativity is the “duck test”: if it walks like a duck and quacks like a duck, it must be a duck. This approach usually involves selecting a group of people—writers, visual artists, musicians, inventors, business innovators, scientists—who have been recognized for some kind of creative achievement, usually through the awarding of major prizes (the Nobel, the Pulitzer, and so forth). Because this approach focuses on people whose widely recognized creativity sets them apart from the general population, it is sometimes referred to as the study of “big C.” The problem with this approach is its inherent subjectivity. What does it mean, for example, to have “created” something? Can creativity in the arts be equated with creativity in the sciences or in business, or should such groups be studied separately? For that matter, should science or business innovation be considered creative at all?

Although I recognize and respect the value of studying “little c,” I am an unashamed advocate of studying “big C.” I first used this approach in the mid-1970s and 1980s, when I conducted one of the first empirical studies of creativity and mental illness. Not long after I joined the psychiatry faculty of the Iowa College of Medicine, I ran into the chair of the department, a biologically oriented psychiatrist known for his salty language and male chauvinism. “Andreasen,” he told me, “you may be an M.D./Ph.D., but that Ph.D. of yours isn’t worth sh--, and it won’t count favorably toward your promotion.” I was proud of my literary background and believed that it made me a better clinician and a better scientist, so I decided to prove him wrong by using my background as an entry point to a scientific study of genius and insanity.

The University of Iowa is home to the Writers’ Workshop, the oldest and most famous creative-writing program in the United States (UNESCO has designated Iowa City as one of its seven “Cities of Literature,” along with the likes of Dublin and Edinburgh). Thanks to my time in the university’s English department, I was able to recruit study subjects from the workshop’s ranks of distinguished permanent and visiting faculty. Over the course of 15 years, I studied not only Kurt Vonnegut but Richard Yates, John Cheever, and 27 other well-known writers.


The writer Kurt Vonnegut came from a family with a long history of mental illness—and exceptional creativity. Above: Vonnegut (right) meets with Hollywood producer Mark Robson in 1971. (AP)

Going into the study, I keyed my hypotheses off the litany of famous people who I knew had personal or family histories of mental illness. James Joyce, for example, had a daughter who suffered from schizophrenia, and he himself had traits that placed him on the schizophrenia spectrum. (He was socially aloof and even cruel to those close to him, and his writing became progressively more detached from his audience and from reality, culminating in the near-psychotic neologisms and loose associations of Finnegans Wake.) Bertrand Russell, a philosopher whose work I admired, had multiple family members who suffered from schizophrenia. Einstein had a son with schizophrenia, and he himself displayed some of the social and interpersonal ineptitudes that can characterize the illness. Based on these clues, I hypothesized that my subjects would have an increased rate of schizophrenia in family members but that they themselves would be relatively well. I also hypothesized that creativity might run in families, based on prevailing views that the tendencies toward psychosis and toward having creative and original ideas were closely linked.

I began by designing a standard interview for my subjects, covering topics such as developmental, social, family, and psychiatric history, and work habits and approach to writing. Drawing on creativity studies done by the psychiatric epidemiologist Thomas McNeil, I evaluated creativity in family members by assigning those who had had very successful creative careers an A++ rating and those who had pursued creative interests or hobbies an A+.

My final challenge was selecting a control group. After entertaining the possibility of choosing a homogeneous group whose work is not usually considered creative, such as lawyers, I decided that it would be best to examine a more varied group of people from a mixture of professions, such as administrators, accountants, and social workers. I matched this control group with the writers according to age and educational level. By matching based on education, I hoped to match for IQ, which worked out well; both the test and the control groups had an average IQ of about 120. These results confirmed Terman’s findings that creative genius is not the same as high IQ. If having a very high IQ was not what made these writers creative, then what was?

As I began interviewing my subjects, I soon realized that I would not be confirming my schizophrenia hypothesis. If I had paid more attention to Sylvia Plath and Robert Lowell, who both suffered from what we today call mood disorder, and less to James Joyce and Bertrand Russell, I might have foreseen this. One after another, my writer subjects came to my office and spent three or four hours pouring out the stories of their struggles with mood disorder—mostly depression, but occasionally bipolar disorder. A full 80 percent of them had had some kind of mood disturbance at some time in their lives, compared with just 30 percent of the control group—only slightly less than an age-matched group in the general population. (At first I had been surprised that nearly all the writers I approached would so eagerly agree to participate in a study with a young and unknown assistant professor—but I quickly came to understand why they were so interested in talking to a psychiatrist.) The Vonneguts turned out to be representative of the writers’ families, in which both mood disorder and creativity were overrepresented—as with the Vonneguts, some of the creative relatives were writers, but others were dancers, visual artists, chemists, architects, or mathematicians. This is consistent with what some other studies have found. When the psychologist Kay Redfield Jamison looked at 47 famous writers and artists in Great Britain, she found that more than 38 percent had been treated for a mood disorder; the highest rates occurred among playwrights, and the second-highest among poets. When Joseph Schildkraut, a psychiatrist at Harvard Medical School, studied a group of 15 abstract-expressionist painters in the mid-20th century, he found that half of them had some form of mental illness, mostly depression or bipolar disorder; nearly half of these artists failed to live past age 60.


The brain of a genius: After completing her analysis of a creative person, the author provides the subject with a 3-D model of his or her brain. (Mike Basher)

WHILE MY WORKSHOP STUDY answered some questions, it raised others. Why does creativity run in families? What is it that gets transmitted? How much is due to nature and how much to nurture? Are writers especially prone to mood disorders because writing is an inherently lonely and introspective activity? What would I find if I studied a group of scientists instead?

These questions percolated in my mind in the weeks, months, and eventually years after the study. As I focused my research on the neurobiology of severe mental illnesses, including schizophrenia and mood disorders, studying the nature of creativity—important as the topic was and is—seemed less pressing than searching for ways to alleviate the suffering of patients stricken with these dreadful and potentially lethal brain disorders. During the 1980s, new neuroimaging techniques gave researchers the ability to study patients’ brains directly, an approach I began using to answer questions about how and why the structure and functional activity of the brain is disrupted in some people with serious mental illnesses.

As I spent more time with neuroimaging technology, I couldn’t help but wonder what we would find if we used it to look inside the heads of highly creative people. Would we see a little genie that doesn’t exist inside other people’s heads?

Today’s neuroimaging tools show brain structure with a precision approximating that of the examination of post-mortem tissue; this allows researchers to study all sorts of connections between brain measurements and personal characteristics. For example, we know that London taxi drivers, who must memorize maps of the city to earn a hackney-carriage license, have an enlarged hippocampus—a key memory region—as demonstrated in a magnetic-resonance-imaging, or MRI, study. (They know it, too: on a recent trip to London, I was proudly regaled with this information by several different taxi drivers.) Imaging studies of symphony-orchestra musicians have found them to possess an unusually large Broca’s area—a part of the brain in the left hemisphere that is associated with language—along with other discrepancies. Using another technique, functional magnetic resonance imaging (fMRI), we can watch how the brain behaves when engaged in thought.

Designing neuroimaging studies, however, is exceedingly tricky. Capturing human mental processes can be like capturing quicksilver. The brain has as many neurons as there are stars in the Milky Way, each connected to other neurons by thousands of spines, which contain synapses that change continuously depending on what the neurons have recently learned. Capturing brain activity using imaging technology inevitably leads to oversimplifications, as sometimes evidenced by news reports that an investigator has found the location of something—love, guilt, decision making—in a single region of the brain.

And what are we even looking for when we search for evidence of “creativity” in the brain? Although we have a definition of creativity that many people accept—the ability to produce something that is novel or original and useful or adaptive—achieving that “something” is part of a complex process, one often depicted as an “aha” or “eureka” experience. This narrative is appealing—for example, “Newton developed the concept of gravity around 1666, when an apple fell on his head while he was meditating under an apple tree.” The truth is that by 1666, Newton had already spent many years teaching himself the mathematics of his time (Euclidean geometry, algebra, Cartesian coordinates) and inventing calculus so that he could measure planetary orbits and the area under a curve. He continued to work on his theory of gravity over the subsequent years, completing the effort only in 1687, when he published Philosophiæ Naturalis Principia Mathematica. In other words, Newton’s formulation of the concept of gravity took more than 20 years and included multiple components: preparation, incubation, inspiration—a version of the eureka experience—and production. Many forms of creativity, from writing a novel to discovering the structure of DNA, require this kind of ongoing, iterative process.

With functional magnetic resonance imaging, the best we can do is capture brain activity during brief moments in time while subjects are performing some task. For instance, observing brain activity while test subjects look at photographs of their relatives can help answer the question of which parts of the brain people use when they recognize familiar faces. Creativity, of course, cannot be distilled into a single mental process, and it cannot be captured in a snapshot—nor can people produce a creative insight or thought on demand. I spent many years thinking about how to design an imaging study that could identify the unique features of the creative brain.


The images on the left show the brain of a creative subject (top) and a matched control subject during a word-association task. The images on the right show brain activation as the subjects alternate between an experimental task (word association) and a control task (reading a word). The line representing the creative subject’s brain activation moves smoothly up and down as the task changes, reflecting effective use of the association cortices in making connections. The control subject’s activation line looks ragged by comparison.

MOST OF THE HUMAN BRAIN'S high-level functions arise from the six layers of nerve cells and their dendrites embedded in its enormous surface area, called the cerebral cortex, which is compressed to a size small enough to be carried around on our shoulders through a process known as gyrification—essentially, producing lots of folds. Some regions of the brain are highly specialized, receiving sensory information from our eyes, ears, skin, mouth, or nose, or controlling our movements. We call these regions the primary visual, auditory, sensory, and motor cortices. They collect information from the world around us and execute our actions. But we would be helpless, and effectively nonhuman, if our brains consisted only of these regions.

In fact, the most extensively developed regions in the human brain are known as association cortices. These regions help us interpret and make use of the specialized information collected by the primary visual, auditory, sensory, and motor regions. For example, as you read these words on a page or a screen, they register as black lines on a white background in your primary visual cortex. If the process stopped at that point, you wouldn’t be reading at all. To read, your brain, through miraculously complex processes that scientists are still figuring out, needs to forward those black letters on to association-cortex regions such as the angular gyrus, so that meaning is attached to them; and then on to language-association regions in the temporal lobes, so that the words are connected not only to one another but also to their associated memories and given richer meanings. These associated memories and meanings constitute a “verbal lexicon,” which can be accessed for reading, speaking, listening, and writing. Each person’s lexicon is a bit different, even if the words themselves are the same, because each person has different associated memories and meanings. One difference between a great writer like Shakespeare and, say, the typical stockbroker is the size and richness of the verbal lexicon in his or her temporal association cortices, as well as the complexity of the cortices’ connections with other association regions in the frontal and parietal lobes.

A neuroimaging study I conducted in 1995 using positron-emission tomography, or PET, scanning turned out to be unexpectedly useful in advancing my own understanding of association cortices and their role in the creative process.

This PET study was designed to examine the brain’s different memory systems, which the great Canadian psychologist Endel Tulving identified. One system, episodic memory, is autobiographical—it consists of information linked to an individual’s personal experiences. It is called “episodic” because it consists of time-linked sequential information, such as the events that occurred on a person’s wedding day. My team and I compared this with another system, that of semantic memory, which is a repository of general information and is not personal or time-linked. In this study, we divided episodic memory into two subtypes. We examined focused episodic memory by asking subjects to recall a specific event that had occurred in the past and to describe it with their eyes closed. And we examined a condition that we called random episodic silent thought, or REST: we asked subjects to lie quietly with their eyes closed, to relax, and to think about whatever came to mind. In essence, they would be engaged in “free association,” letting their minds wander. The acronym REST was intentionally ironic; we suspected that the association regions of the brain would actually be wildly active during this state.

This suspicion was based on what we had learned about free association from the psychoanalytic approach to understanding the mind. In the hands of Freud and other psychoanalysts, free association—spontaneously saying whatever comes to mind without censorship—became a window into understanding unconscious processes. Based on my interviews with the creative subjects in my workshop study, and from additional conversations with artists, I knew that such unconscious processes are an important component of creativity. For example, Neil Simon told me: “I don’t write consciously—it is as if the muse sits on my shoulder” and “I slip into a state that is apart from reality.” (Examples from history suggest the same thing. Samuel Taylor Coleridge once described how he composed an entire 300-line poem about Kubla Khan while in an opiate-induced, dreamlike state, and began writing it down when he awoke; he said he then lost most of it when he got interrupted and called away on an errand—thus the finished poem he published was but a fragment of what originally came to him in his dreamlike state.)

Based on all this, I surmised that observing which parts of the brain are most active during free association would give us clues about the neural basis of creativity. And what did we find? Sure enough, the association cortices were wildly active during REST.

I realized that I obviously couldn’t capture the entire creative process—instead, I could home in on the parts of the brain that make creativity possible. Once I arrived at this idea, the design for the imaging studies was obvious: I needed to compare the brains of highly creative people with those of control subjects as they engaged in tasks that activated their association cortices.

For years, I had been asking myself what might be special or unique about the brains of the workshop writers I had studied. In my own version of a eureka moment, the answer finally came to me: creative people are better at recognizing relationships, making associations and connections, and seeing things in an original way—seeing things that others cannot see. To test this capacity, I needed to study the regions of the brain that go crazy when you let your thoughts wander. I needed to target the association cortices. In addition to REST, I could observe people performing simple tasks that are easy to do in an MRI scanner, such as word association, which would permit me to compare highly creative people—who have that “genie in the brain”—with the members of a control group matched by age and education and gender, people who have “ordinary creativity” and who have not achieved the levels of recognition that characterize highly creative people. I was ready to design Creativity Study II.

THIS TIME AROUND, I wanted to examine a more diverse sample of creativity, from the sciences as well as the arts. My motivations were partly selfish—I wanted the chance to discuss the creative process with people who might think and work differently, and I thought I could probably learn a lot by listening to just a few people from specific scientific fields. After all, each would be an individual jewel—a fascinating study on his or her own. Now that I’m about halfway through the study, I can say that this is exactly what has happened. My individual jewels so far include, among others, the filmmaker George Lucas, the mathematician and Fields Medalist William Thurston, the Pulitzer Prize–winning novelist Jane Smiley, and six Nobel laureates from the fields of chemistry, physics, and physiology or medicine. Because winners of major awards are typically older, and because I wanted to include some younger people, I’ve also recruited winners of the National Institutes of Health Pioneer Award and other prizes in the arts.

Apart from stating their names, I do not have permission to reveal individual information about my subjects. And because the study is ongoing (each subject can take as long as a year to recruit, making for slow progress), we do not yet have any definitive results—though we do have a good sense of the direction that things are taking. By studying the structural and functional characteristics of subjects’ brains in addition to their personal and family histories, we are learning an enormous amount about how creativity occurs in the brain, as well as whether these scientists and artists display the same personal or familial connections to mental illness that the subjects in my Iowa Writers’ Workshop study did.

To participate in the study, each subject spends three days in Iowa City, since it is important to conduct the research using the same MRI scanner. The subjects and I typically get to know each other over dinner at my home (and a bottle of Bordeaux from my cellar), and by prowling my 40-acre nature retreat in an all-terrain vehicle, observing whatever wildlife happens to be wandering around. Relaxing together and getting a sense of each other’s human side is helpful going into the day and a half of brain scans and challenging conversations that will follow.

We begin the actual study with an MRI scan, during which subjects perform three different tasks, in addition to REST: word association, picture association, and pattern recognition. Each experimental task alternates with a control task; during word association, for example, subjects are shown words on a screen and asked to either think of the first word that comes to mind (the experimental task) or silently repeat the word they see (the control task). Speaking disrupts the scanning process, so subjects silently indicate when they have completed a task by pressing a button on a keypad.
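
The logic of that alternation is worth spelling out: the analysis looks for voxels whose signal rises and falls with the on/off pattern of the experimental blocks, which is what the smooth activation trace described in the figure caption above reflects. Below is a minimal sketch of the idea using synthetic numbers of my own; a real analysis would also convolve the task pattern with a hemodynamic response function:

    import numpy as np

    rng = np.random.default_rng(0)
    block_len, n_cycles = 10, 6  # scans per block; task and control blocks alternate

    # On/off "boxcar" for the experimental task (1 = word association, 0 = control).
    boxcar = np.tile(np.r_[np.ones(block_len), np.zeros(block_len)], n_cycles)

    # Synthetic voxels: one that responds to the task blocks, one that is pure noise.
    responsive = 2.0 * boxcar + rng.normal(0, 1, boxcar.size)
    unresponsive = rng.normal(0, 1, boxcar.size)

    for name, voxel in [("responsive", responsive), ("unresponsive", unresponsive)]:
        r = np.corrcoef(boxcar, voxel)[0, 1]
        print(f"{name} voxel: correlation with task design = {r:.2f}")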

Playing word games inside a thumping, screeching hollow tube seems like a far cry from the kind of meandering, spontaneous discovery process that we tend to associate with creativity. It is, however, as close as one can come to a proxy for that experience, apart from REST. You cannot force creativity to happen—every creative person can attest to that. But the essence of creativity is making connections and solving puzzles. The design of these MRI tasks permits us to visualize what is happening in the creative brain when it’s doing those things.

As I hypothesized, the creative people have shown stronger activations in their association cortices during all four tasks than the controls have. (See the images above.) This pattern has held true for both the artists and the scientists, suggesting that similar brain processes may underlie a broad spectrum of creative expression. Common stereotypes about “right-brained” versus “left-brained” people notwithstanding, this parallel makes sense. Many creative people are polymaths, people with broad interests in many fields—a common trait among my study subjects.

After the brain scans, I settle in with subjects for an in-depth interview. Preparing for these interviews can be fun (rewatching all of George Lucas’s films, for example, or reading Jane Smiley’s collected works) as well as challenging (toughing through mathematics papers by William Thurston). I begin by asking subjects about their life history—where they grew up, where they went to school, what activities they enjoyed. I ask about their parents—their education, occupation, and parenting style—and about how the family got along. I learn about brothers, sisters, and children, and get a sense for who else in a subject’s family is or has been creative and how creativity may have been nurtured at home. We talk about how the subjects managed the challenges of growing up, any early interests and hobbies (particularly those related to the creative activities they pursue as adults), dating patterns, life in college and graduate school, marriages, and child-rearing. I ask them to describe a typical day at work and to think through how they have achieved such a high level of creativity. (One thing I’ve learned from this line of questioning is that creative people work much harder than the average person—and usually that’s because they love their work.)

One of the most personal and sometimes painful parts of the interview is when I ask about mental illness in subjects’ families as well as in their own lives. They’ve told me about such childhood experiences as having a mother commit suicide or watching ugly outbreaks of violence between two alcoholic parents, and the pain and scars that these experiences have inflicted. (Two of the 13 creative subjects in my current study have lost a parent to suicide—a rate many times that of the general U.S. population.) Talking with those subjects who have suffered from a mental illness themselves, I hear about how it has affected their work and how they have learned to cope.


The author’s research on creativity includes in-depth neurological studies of “individual jewels,” including Pulitzer Prize–winning novelist Jane Smiley, shown here in 1991. (AP)

SO FAR, THIS STUDY—which has examined 13 creative geniuses and 13 controls—has borne out a link between mental illness and creativity similar to the one I found in my Writers’ Workshop study. The creative subjects and their relatives have a higher rate of mental illness than the controls and their relatives do (though not as high a rate as I found in the first study), with the frequency being fairly even across the artists and the scientists. The most-common diagnoses include bipolar disorder, depression, anxiety or panic disorder, and alcoholism. I’ve also found some evidence supporting my early hypothesis that exceptionally creative people are more likely than control subjects to have one or more first-degree relatives with schizophrenia. Interestingly, when the physician and researcher Jon L. Karlsson examined the relatives of everyone listed in Iceland’s version of Who’s Who in the 1940s and ’60s, he found that they had higher-than-average rates of schizophrenia. Leonard Heston, a former psychiatric colleague of mine at Iowa, conducted an influential study of the children of schizophrenic mothers raised from infancy by foster or adoptive parents, and found that more than 10 percent of these children developed schizophrenia, as compared with zero percent of a control group. This suggests a powerful genetic component to schizophrenia. Heston and I discussed whether some particularly creative people owe their gifts to a subclinical variant of schizophrenia that loosens their associative links sufficiently to enhance their creativity but not enough to make them mentally ill.

As in the first study, I’ve also found that creativity tends to run in families, and to take diverse forms. In this arena, nurture clearly plays a strong role. Half the subjects come from very high-achieving backgrounds, with at least one parent who has a doctoral degree. The majority grew up in an environment where learning and education were highly valued. This is how one person described his childhood:
Our family evenings—just everybody sitting around working. We’d all be in the same room, and [my mother] would be working on her papers, preparing her lesson plans, and my father had huge stacks of papers and journals … This was before laptops, and so it was all paper-based. And I’d be sitting there with my homework, and my sisters are reading. And we’d just spend a few hours every night for 10 to 15 years—that’s how it was. Just working together. No TV.
So why do these highly gifted people experience mental illness at a higher-than-average rate? Given that (as a group) their family members have higher rates than those that occur in the general population or in the matched comparison group, we must suspect that nature plays a role—that Francis Galton and others were right about the role of hereditary factors in people’s predisposition to both creativity and mental illness. We can only speculate about what those factors might be, but there are some clues in how these people describe themselves and their lifestyles.

One possible contributory factor is a personality style shared by many of my creative subjects. These subjects are adventuresome and exploratory. They take risks. Particularly in science, the best work tends to occur in new frontiers. (As a popular saying among scientists goes: “When you work at the cutting edge, you are likely to bleed.”) They have to confront doubt and rejection. And yet they have to persist in spite of that, because they believe strongly in the value of what they do. This can lead to psychic pain, which may manifest itself as depression or anxiety, or lead people to attempt to reduce their discomfort by turning to pain relievers such as alcohol.

I’ve been struck by how many of these people refer to their most creative ideas as “obvious.” Since these ideas are almost always the opposite of obvious to other people, creative luminaries can face doubt and resistance when advocating for them. As one artist told me, “The funny thing about [one’s own] talent is that you are blind to it. You just can’t see what it is when you have it … When you have talent and see things in a particular way, you are amazed that other people can’t see it.” Persisting in the face of doubt or rejection, for artists or for scientists, can be a lonely path—one that may also partially explain why some of these people experience mental illness.

ONE INTERESTING PARADOX that has emerged during conversations with subjects about their creative processes is that, though many of them suffer from mood and anxiety disorders, they associate their gifts with strong feelings of joy and excitement. “Doing good science is simply the most pleasurable thing anyone can do,” one scientist told me. “It is like having good sex. It excites you all over and makes you feel as if you are all-powerful and complete.” This is reminiscent of what creative geniuses throughout history have said. For instance, here’s Tchaikovsky, the composer, writing in the mid-19th century:
It would be vain to try to put into words that immeasurable sense of bliss which comes over me directly a new idea awakens in me and begins to assume a different form. I forget everything and behave like a madman. Everything within me starts pulsing and quivering; hardly have I begun the sketch ere one thought follows another.
Another of my subjects, a neuroscientist and an inventor, told me, “There is no greater joy that I have in my life than having an idea that’s a good idea. At that moment it pops into my head, it is so deeply satisfying and rewarding … My nucleus accumbens is probably going nuts when it happens.” (The nucleus accumbens, at the core of the brain’s reward system, is activated by pleasure, whether it comes from eating good food or receiving money or taking euphoria-inducing drugs.)

As for how these ideas emerge, almost all of my subjects confirmed that when eureka moments occur, they tend to be precipitated by long periods of preparation and incubation, and to strike when the mind is relaxed—during that state we called REST. “A lot of it happens when you are doing one thing and you’re not thinking about what your mind is doing,” one of the artists in my study told me. “I’m either watching television, I’m reading a book, and I make a connection … It may have nothing to do with what I am doing, but somehow or other you see something or hear something or do something, and it pops that connection together.”

Many subjects mentioned lighting on ideas while showering, driving, or exercising. One described a more unusual regimen involving an afternoon nap: “It’s during this nap that I get a lot of my work done. I find that when the ideas come to me, they come as I’m falling asleep, they come as I’m waking up, they come if I’m sitting in the tub. I don’t normally take baths … but sometimes I’ll just go in there and have a think.”

SOME OF THE OTHER most-common findings my studies have suggested include:

Many creative people are autodidacts. They like to teach themselves, rather than be spoon-fed information or knowledge in standard educational settings. Famously, three Silicon Valley creative geniuses have been college dropouts: Bill Gates, Steve Jobs, and Mark Zuckerberg. Steve Jobs—for many, the archetype of the creative person—popularized the motto “Think different.” Because their thinking is different, my subjects often express the idea that standard ways of learning and teaching are not always helpful and may even be distracting, and that they prefer to learn on their own. Many of my subjects taught themselves to read before even starting school, and many have read widely throughout their lives. For example, in his article “On Proof and Progress in Mathematics,” Bill Thurston wrote:
My mathematical education was rather independent and idiosyncratic, where for a number of years I learned things on my own, developing personal mental models for how to think about mathematics. This has often been a big advantage for me in thinking about mathematics, because it’s easy to pick up later the standard mental models shared by groups of mathematicians.
This observation has important implications for the education of creatively gifted children. They need to be allowed and even encouraged to “think different.” (Several subjects described to me how they would get in trouble in school for pointing out when their teachers said things that they knew to be wrong, such as when a second-grade teacher explained to one of my subjects that light and sound are both waves and travel at the same speed. The teacher did not appreciate being corrected.)

Many creative people are polymaths, as historic geniuses including Michelangelo and Leonardo da Vinci were. George Lucas was awarded not only the National Medal of Arts in 2012 but also the National Medal of Technology in 2004. Lucas’s interests include anthropology, history, sociology, neuroscience, digital technology, architecture, and interior design. Another polymath, one of the scientists, described his love of literature:
I love words, and I love the rhythms and sounds of words … [As a young child] I very rapidly built up a huge storehouse of … Shakespearean sonnets, soliloquies, poems across the whole spectrum … When I got to college, I was open to many possible careers. I actually took a creative-writing course early. I strongly considered being a novelist or a writer or a poet, because I love words that much … [But for] the academics, it’s not so much about the beauty of the words. So I found that dissatisfying, and I took some biology courses, some quantum courses. I really clicked with biology. It seemed like a complex system that was tractable, beautiful, important. And so I chose biochemistry.
The arts and the sciences are seen as separate tracks, and students are encouraged to specialize in one or the other. If we wish to nurture creative students, this may be a serious error.

Creative people tend to be very persistent, even when confronted with skepticism or rejection. Asked what it takes to be a successful scientist, one replied:
Perseverance … In order to have that freedom to find things out, you have to have perseverance … The grant doesn’t get funded, and the next day you get up, and you put the next foot in front, and you keep putting your foot in front … I still take things personally. I don’t get a grant, and … I’m upset for days. And then I sit down and I write the grant again.
DO CREATIVE PEOPLE simply have more ideas, and therefore differ from average people only in a quantitative way, or are they also qualitatively different? One subject, a neuroscientist and an inventor, addressed this question in an interesting way, conceptualizing the matter in terms of kites and strings:
In the R&D business, we kind of lump people into two categories: inventors and engineers. The inventor is the kite kind of person. They have a zillion ideas and they come up with great first prototypes. But generally an inventor … is not a tidy person. He sees the big picture and … [is] constantly lashing something together that doesn’t really work. And then the engineers are the strings, the craftsmen [who pick out a good idea] and make it really practical. So, one is about a good idea, the other is about … making it practical.
Of course, having too many ideas can be dangerous. One subject, a scientist who happens to be both a kite and a string, described to me “a willingness to take an enormous risk with your whole heart and soul and mind on something where you know the impact—if it worked—would be utterly transformative.” The if here is significant. Part of what comes with seeing connections no one else sees is that not all of these connections actually exist. “Everybody has crazy things they want to try,” that same subject told me. “Part of creativity is picking the little bubbles that come up to your conscious mind, and picking which one to let grow and which one to give access to more of your mind, and then have that translate into action.”

In A Beautiful Mind, her biography of the mathematician John Nash, Sylvia Nasar describes a visit Nash received from a fellow mathematician while institutionalized at McLean Hospital. “How could you, a mathematician, a man devoted to reason and logical truth,” the colleague asked, “believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world?” To which Nash replied: “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Some people see things others cannot, and they are right, and we call them creative geniuses. Some people see things others cannot, and they are wrong, and we call them mentally ill. And some people, like John Nash, are both.

The Internet’s Own Boy: The Story of Aaron Swartz - New Documentary Is Free Online


This story is a tragedy, in my opinion. Aaron Swartz was being made an example of for having embarrassed the government on a couple of occasions. Even the case for which charges were finally brought did not cause any financial harm to his target (JSTOR), which urged the government to drop the charges. The Feds refused - Swartz's conviction would serve as a warning. Instead, the young man hanged himself in his NYC apartment.

Here is a key passage that explains why so many of us supported Swartz's "work":
Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)
Academic publishers like Reed Elsevier, JSTOR, ScienceDirect, Nature, Hindawi, Springer, and others control nearly all of the published research in nearly every field, much of which is funded by tax dollars either directly or indirectly.

These publishers then charge authors hundreds [sometimes thousands] of dollars to publish, and charge more if the author wants open access; they charge for images in articles; they charge libraries hundreds of dollars for subscriptions, even digital-only subscriptions; and they try to charge consumers (like me) between $30 and $70 for access to a single article (often for only 24 hours).

Anyway, first up is a review of the film and the life of its subject, via Salon, followed by an open access version of the film from Open Culture.

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption -- and paid a cruel price

Andrew O'Hehir



Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary The Internet’s Own Boy: The Story of Aaron Swartz, which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics, things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.

That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the Guerilla Open Access Manifesto Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.


“The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

* * * * *

Luckily for us (especially those of us in a town too small to get a showing of this film, or who can't afford pay-per-view), there is an open access version of the film available online.


The Internet’s Own Boy: New Documentary About Aaron Swartz Now Free Online

Open Culture | June 29th, 2014

On BoingBoing today, Cory Doctorow writes: “The Creative Commons-licensed version of The Internet’s Own Boy, Brian Knappenberger’s documentary about Aaron Swartz, is now available on the Internet Archive, which is especially useful for people outside of the US, who aren’t able to pay to see it online…. The Internet Archive makes the movie available to download or stream, in MPEG 4 and Ogg. There’s also a torrentable version.”

According to the film summary, the new documentary “depicts the life of American computer programmer, writer, political organizer and Internet activist Aaron Swartz. It features interviews with his family and friends as well as the internet luminaries who worked with him. The film tells his story up to his eventual suicide after a legal battle, and explores the questions of access to information and civil liberties that drove his work.”

The Internet’s Own Boy will be added to our collection, 200 Free Documentaries Online, part of our larger collection, 675 Free Movies Online: Great Classics, Indies, Noir, Westerns, etc.

Monday, June 30, 2014

Cultural Contexts, Developmental Capacities, and the Meta-Narratives of Ritual Abuse Survivors

 

I have not seen any good and comprehensive work on this topic, so I want to throw out some ideas and see what sticks. You can view this as me thinking out loud; I have no investment in being "right," I simply seek a framework within which to conceptualize cases.

Few clients we see as therapists are as challenging in their insistence on the meta-narratives of their abuse as survivors of ritual abuse. We can always work with them "as if" their stories are true and accurate, but we then run the risk of validating harmful and often pre-rational beliefs (especially with satanic abuse narratives).

In my work as a sexual trauma counselor, and as one who specializes in clients manifesting dissociative and "psychotic" symptoms, I see more claims of ritual abuse than most therapists I know. I have read Colin Ross's controversial book, Satanic Ritual Abuse: Principles of Treatment, which keeps an open mind to the possibility of organized ritual abuse. Ross recommends that, in treatment, the therapist adopt "an attitude hovering between disbelief and credulous entrapment" (from the publisher's blurb).

I'm not interested in proving or disproving the existence of vast networks of satanic ritual abuse - in part because I see other meta-narratives in the clients with whom I work, not just the satanic ritual aspect. There is also the issue of the client experiencing a rejection of the details of their narrative as a rejection of their experience. That can only be destructive and does not serve the client.

What I am interested in understanding is the etiology of the various meta-narratives and why some clients present one type over another.

Meta-Narratives of Ritual Abuse


In the time I have been doing this work, I have seen three basic meta-narratives to the ritual abuse "memories."
  • The first one is the one most people have heard of, satanic sexual abuse, and includes blood rituals, sacrifice of animals and infants, offerings of children as sexual objects to members of the "circle," and marriage of female children to satan or other demons.
  • The second one is a little less common, but shows up as having a Nazi or racist theme and structure, including child pornography, child prostitution, and child "breeding."
  • The third one involves a conspiracy by the United States government (MK-Ultra and its derivatives) to conduct secret mind control and manipulation experiments on American citizens (usually children), including induced dissociative identity disorder and the creation of super spies.
Let's begin with the first and most common meta-narrative, satanic abuse.

The last eruption of this phenomenon into the larger society occurred in the 1980s and early 1990s, focusing on allegations of widespread satanic worship and ritual abuse of animals and children.

Many innocent people were charged with and convicted of crimes that had never happened. Many of the "recovered memories" the children presented were implanted into very suggestible minds by therapists who were ignorant of iatrogenic symptoms or had an emotional investment in "saving" these children from the hordes of satan.

According to Wikipedia's entry on Satanic Ritual Abuse, "Astrophysicist and astrobiologist Carl Sagan devoted an entire chapter of his last book, The Demon-Haunted World: Science as a Candle in the Dark (1996) to a critique of claims of recovered memories of UFO abductions and satanic ritual abuse and cited material from the newsletter of the False Memory Syndrome Foundation with approval.[62]"

The iatrogenic nature of the recovered memories used in court did more damage to the credibility of traumatic memory than any other single event in the history of psychology. Survivors who, as children, naturally dissociated highly traumatic memories of abuse are now disbelieved when some form of trigger causes those memories to return.

These recovered memories become problematic for the therapist, however, when they include satanic ritual abuse. [Please note, I have no doubt that ritual abuse exists, but the contexts in which it exists are open to discussion.] When these memories are recalled, some clients want to report to police, adding another layer of complexity to this issue.

Researchers have traditionally identified four forms of satanic ritual abuse:
  1. Cult-based ritualism in which the abuse had a spiritual or social goal for the perpetrators
  2. Pseudo-ritualism in which the goal was sexual gratification and the rituals were used to frighten or intimidate victims
  3. Psychopathological ritualism in which the rituals were due to mental disorders
  4. Crimes with ambiguous meaning (such as graffiti or vandalism) generally committed by teenagers but attributed to Satanic cults
The only ones of these I have any experience with (in my opinion) are numbers 2 and 4. Of these, number 2, "pseudo-ritualism," seems likely to be one of the more coherent explanations.

Satanic abuse is easily the oldest of the three major themes; government and technology only became a theme following the Enlightenment (see A Visionary Madness for the history of the first "influencing machine" and its association with mental illness - likely PTSD with psychotic features). I would assume the racial meta-narrative is also quite old, but in the US it may not have been as prominent until the post-Reconstruction era, when the Ku Klux Klan emerged, and perhaps not fully until the Nazi holocaust against the Jews.

Embodiment of Evil


What all three of these meta-narratives have in common is the embodiment of "evil." Whether it's satan, Hitler, or the secret branches of government, each of these presents powerful evil as an explanatory factor for the sexual and physical abuse of children.

The Christian mythology of satan (the devil) is the easiest one to grasp because our society is based on Christian religious values. We might trace the fear of satanic cults back to the witch hunts in Europe during the Inquisition (witch trials began in the late 1400s).

[As an aside, there are also long-standing allegations of Jewish cults centered around Kabbalah rituals that engage in child sexual abuse and sacrifice.]

Despite the history, there has never been any real proof of witches or of organized satanic abuse (according to the FBI). This is from Wikipedia:
Kenneth Lanning, an FBI expert in the investigation of child sexual abuse,[150] has stated that pseudo-satanism may exist but there is "little or no evidence for ... large-scale baby breeding, human sacrifice, and organized satanic conspiracies".[46]
There are many possible alternative answers to the question of why victims are alleging things that don't seem to be true... I believe that there is a middle ground — a continuum of possible activity. Some of what the victims allege may be true and accurate, some may be misperceived or distorted, some may be screened or symbolic, and some may be "contaminated" or false. The problem and challenge, especially for law enforcement, is to determine which is which. This can only be done through active investigation. I believe that the majority of victims alleging "ritual" abuse are in fact victims of some form of abuse or trauma.[46]
Lanning produced a monograph in 1994 on SRA aimed at child protection authorities, which contained his opinion that despite hundreds of investigations no corroboration of SRA had been found. Following this report, several convictions based on SRA allegations were overturned and the defendants released.[54]
I suspect that even prior to the Christian era one tribe would fear another tribe and accuse them of molesting children (among other taboo violations). One of the dominant taboos in most, if not all, agricultural and post-agricultural societies is the one against adults having sex with children (incest and/or pedophilia). Granted, this taboo has never prevented such molestation.

With the rise of KKK influence in the 1920s, and then the Nazi racist agenda in the 1930s and 1940s, there was a "new" (racism and ethnocentrism are not new) embodiment of evil: a violent, racist, hate-based model. An important aspect of this meta-narrative is racial purity, which plays out in some "recovered memories" of children being bred to produce more Aryans.

The history of technology/government conspiracy in mental illness goes back to shortly after the Enlightenment, as mentioned above (A Visionary Madness).

Following World War II, the Cold War and the rapid growth of psychopharmacology opened new doors of research and led to new efforts at bioengineering human beings. Growing out of Project Bluebird and Project Artichoke, MKUltra became the primary "special ops" research program of the military and CIA.

Via Wikipedia:
Project MKUltra — sometimes referred to as the CIA's mind control program — is the code name of a U.S. government human research operation experimenting in the behavioral engineering of humans. Organized through the Scientific Intelligence Division of the Central Intelligence Agency (CIA), the project coordinated with the Special Operations Division of the U.S. Army's Chemical Corps.[1] The program began in the early 1950s, was officially sanctioned in 1953, was reduced in scope in 1964, further curtailed in 1967 and officially halted in 1973.[2] The program engaged in many illegal activities;[3][4][5] in particular it used unwitting U.S. and Canadian citizens as its test subjects, which led to controversy regarding its legitimacy.[3](p74)[6][7][8] MKUltra used numerous methodologies to manipulate people's mental states and alter brain functions, including the surreptitious administration of drugs (especially LSD) and other chemicals, hypnosis, sensory deprivation, isolation, verbal and sexual abuse, as well as various forms of torture.[9]
One challenge with this meta-narrative is that the government actually DID many of the things with which it has been charged, and it has either admitted to them or paid off accusers to keep them quiet. Moreover, its actions were supported by more than 80 institutions, including 44 colleges and universities, as well as hospitals, prisons, and pharmaceutical companies [Horrock, Nicholas M. (4 Aug 1977). "80 Institutions Used in C.I.A. Mind Studies: Admiral Turner Tells Senators of Behavior Control Research, Bars Drug Testing Now." New York Times].

Another issue is that the U.S. government was instrumental in the trials of Nazi doctors for their experimentation on human subjects, but then that same government experimented on its own citizens. Citizens in the U.S. have learned not to trust the government, and those prone to conspiracy thinking believe the government is still conducting experiments on human subjects, with theories ranging from "chemtrails" to HAARP (High Frequency Active Auroral Research Program) to water fluoridation.

In those who subscribe to any of these meta-narratives, the identified groups, whether satanists, white supremacists, or the government, are all embodiments of evil.

Developmental Stages and Conceptions of Evil


Using the framework developed by Clare Graves and expanded by Beck and Cowan (1996), we might attribute each of the three meta-narratives to a specific worldview developmental stage.


The four stages in use here are the magical/animistic (BO), the impulsive/egocentric (CP), the power/authoritarian (DQ), and the rational/strategic (ER).

[The first letter stands for the life conditions of a given stage, while the second letter stands for the biopsychosocial capacities developed to cope with those life conditions. When two stages are presented together, one is usually lower-case and one is upper-case. The stage that is dominant gets the upper-case listing. The combining of two stages indicates a transitional space between stages, and since stages are not concrete, there can be three stages listed. As an example, if you see BO/cp, the subject is transitioning from BO into cp, but more of the life conditions and/or coping skills remain in BO than have emerged as cp.]

Satanic ritual abuse

The superstition and magical beliefs of satanic abuse place its worldview in the magical/animistic stage, but there is some element of personal gain (egocentric drives) involved. In the Spiral Dynamics (SD) nomenclature, this would be defined as BO/cp - representing an early transitional period between magical and egocentric.

Nazi and Aryan themes

In addition to the egocentric and power-drive elements of the impulsive/egocentric stage, there is also a strong ethnocentric character to this stage. Within that ethnocentric drive is the belief that we (whoever is defined as "we") are God's people and anyone who is not like us is inferior and to be controlled, used, or slaughtered. In the SD nomenclature, this would be defined as CP/dq due to the underlying belief that race is a divinely given characteristic that defines one's value and role.

Technological and governmental conspiracies

Part of the worldview beneath this meta-narrative of sexual abuse is the belief in an all-powerful government that seeks control of its citizens through authoritarian power and technological manipulation. Again, this is a worldview that straddles two stages, the power/authoritarian (DQ) and the rational/strategic (ER), but the technology aspect is more prevalent, so the SD nomenclature would be dq/ER.
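
To make the notation concrete, here is a minimal sketch in Python (my own toy encoding, nothing official from the SD literature) of the parsing rule from the bracketed note above, applied to the three stage codes just assigned:

    # A toy encoding of the SD stage codes used above. First letter = life
    # conditions, second letter = coping capacities; in a transitional code
    # like "BO/cp", the upper-case pair is the dominant one.

    NARRATIVE_TO_STAGE = {
        "satanic ritual abuse": "BO/cp",        # magical/animistic, early egocentric
        "Nazi/Aryan themes": "CP/dq",           # egocentric shading into authoritarian
        "government/tech conspiracy": "dq/ER",  # authoritarian shading into rational
    }

    def dominant(code: str) -> str:
        """Return the dominant stage pair in a code like 'BO/cp' (here, 'BO')."""
        pairs = code.split("/")
        uppers = [p for p in pairs if p.isupper()]
        return uppers[0] if uppers else pairs[0]

    for narrative, code in NARRATIVE_TO_STAGE.items():
        print(f"{narrative}: {code} (dominant: {dominant(code)})")

Running this simply prints each meta-narrative with its code and the dominant stage pair - nothing in the clinical argument depends on it; it is just a compact restatement of the mapping.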

How this is useful


Being able to identify the subject's worldview allows us to better understand their ego development as well as, potentially, their cognitive, moral, and social development. Here is a graphic that makes correlations (not absolute in any way):


One of the things we notice here is the set of correlations (again, not absolute in any way) between worldview stages, cognitive skills [Piaget], and ego structures [Loevinger]:
  • The magical stage correlates with symbolic thinking (preoperational) and with impulsive ego structures.
  • The egocentric stage correlates with conceptual cognitive skills (still preoperational) and self-protective ego development.
  • The power/authoritarian stage correlates with concrete operational cognitive skills and a conformist ego structure.
  • The rational/strategic stage correlates with formal operational thinking, allowing for more complexity in the meta-narratives, and a self-aware/conscientious ego stage, defined as demonstrating "an increase in self-awareness and the capacity to imagine multiple possibilities in situations" [Witherell, S., & Erickson, V. (2001). "Teacher Education as Adult Development." Theory into Practice, 17(3), p. 231].

While I hesitate to equate ontogeny with phylogeny, Jean Piaget favored a weaker version of recapitulation theory, according to which ontogeny parallels phylogeny because the two are subject to similar external constraints, not because they are equivalent. Developmental psychology has been shown to fit within this framework - a child's cognitive development runs parallel to the cognitive development of the species through evolution [1].

Using this framework, it may be possible to use the meta-narrative of the abuse to help determine the age at which it was experienced. For example, early childhood abuse (prior to age 5) might be more likely to have a satanic theme because the child at this age still engages in magical and symbolic thinking and lacks the logic to "see through" efforts by the perpetrators to impose silence with demonic imagery and contexts.

Likewise, a child of 5-9 might be more likely to have a meta-narrative of Nazism or racism. These are the ages at which children form peer-group cliques, often around shared interests or traits (witness the segregation by race on many playgrounds).

A meta-narrative of MKUltra as the source of abuse is unlikely to come from early childhood abuse - the ideas are too complicated and rational for the magical thinking of early childhood.

Conclusions


All of this is just me thinking out loud, trying to create a framework by which to better understand the narratives I hear from clients. It's always about understanding where the client is coming from, and if this framework does not serve that purpose, then it is useless. That said, case conceptualization with survivors of ritual abuse is challenging at best, so any framework that can help us make sense of their narratives is valuable.

I know many people will reject the use of Spiral Dynamics and integral frameworks in this conceptualization. So be it. I find the framework useful for this discussion. We need some kind of developmental system to help us make sense of clients' cognitive skills (Piaget, Commons), ego development (Loevinger, Cook-Greuter, Kegan), and values/worldviews (Graves, Beck and Cowan), among other lines of development, and no other models are as inclusive as SD and integral theory.

I am very open to being wrong - so I welcome comments and criticisms.


NOTE

1. Foster, Mary LeCron (1994). "Symbolism: The Foundation of Culture." In Tim Ingold (ed.), Companion Encyclopedia of Anthropology, p. 387. Quotation:
While ontogeny does not generally recapitulate phylogeny in any direct sense (Gould 1977), both biological evolution and the stages in the child’s cognitive development follow much the same progression of evolutionary stages as that suggested in the archaeological record (Borchert and Zihlman 1990, Bates 1979, Wynn 1979) ... Thus, one child, having been shown the moon, applied the word ‘moon’ to a variety of objects with similar shapes as well as to the moon itself (Bowerman 1980). This spatial globality of reference is consistent with the archaeological appearance of graphic abstraction before graphic realism.