
Thursday, October 09, 2014

Salon Culture: Network of Ideas - A Conversation with Andrian Kreye

http://upload.wikimedia.org/wikipedia/commons/c/c6/Abraham_Bosse_Salon_de_dames.jpg

In 2001, the LA Times wrote about the emergence of a new salon culture in Los Angeles, frequented by writers, filmmakers, and actors. This phenomenon is the re-emergence of a salon culture that began in 16th-century Italy but is most often associated with the 17th- and 18th-century literary culture of France.

In 2011, both Alternet (US) and The Telegraph (UK) did articles on salon culture. This is a bit of the history of American salons from the Alternet article:
The modern salon formally emerged in New York during the early 20th century. Edith Wharton, who loathed the American literary scene and resettled in Paris in 1907, likely attended the intellectual gatherings hosted by her sister-in-law, Mary Cadwalader Jones, on East 11th Street. In 1900, Jones gained national prominence championing the role of nurses in public health, fiercely arguing for the professionalization of a traditionally female vocation. She enjoyed intellectual life and hosted “Mary Cadwal’s parlor” at which many leading intellectual lights of the day were regulars, including the writers Henry James, Henry Adams and F. Marion Crawford, the painters John LaFarge and John Singer Sargent, and the sculptor Augustus Saint-Gaudens.

However, it was Mabel Dodge’s famous “Evenings,” hosted at her townhouse at 23 Fifth Avenue during the 1910s, that made salons part of the city’s social life. Dodge was a classic Gilded Age “poor little rich girl,” a spoiled dilettante and libertine who, until she found her calling, attached herself to the latest fad and male celebrity. In 1913 she helped organize the controversial International Exhibition of Modern Art, popularly known as the Armory Show, which launched modern art in America. That same year, she joined John Reed, “Big Bill” Haywood and Emma Goldman in support of the IWW-backed silk workers strike in Paterson, NJ, playing a leading role organizing the controversial “Pageant of the Paterson Strike,” held at Madison Square Garden.

Dodge’s salons were organized along the lines of the traditional discussion-group format known as the General Conversation. An appointed leader, normally a specialist in an artistic, academic or political subject, offered a brief introductory commentary focusing the discussion and then invited those in attendance to jump in. Salon leaders ranged from A. A. Brill on Sigmund Freud and psychoanalysis, Reed on Pancho Villa and the Mexican Revolution, and Margaret Sanger on birth control and women’s rights to African-American entertainers from Harlem.

As the scholar Andrea Barnet reminds us, “Dodge’s salon was where black Harlem first met Greenwich Village bohemia and, conversely, where white bohemia got its first taste of a parallel black culture that it would soon not only glorify but actively try to emulate.”
I wish there were something like this in Tucson today. It would be awesome to meet up with a group of intelligent and educated people to exchange ideas, explore new topics, and generally hear new ideas or new perspectives.

I said this out loud the other night, and my girlfriend immediately mentioned it to a friend on Facebook; he and his wife like the idea, so maybe it will happen. And as tradition holds, the salon is often hosted by a woman, the salonnière.

The article below is about one of the major ongoing salons of the 20th and 21st centuries: the Edge salons hosted by John Brockman.

Salon Culture: Network of Ideas

A Conversation with Andrian Kreye [10.2.14]




Salon Culture
NETWORK OF IDEAS


The salon was the engine of the Enlightenment. Now it is coming back. In the digital era, the questions may differ from those asked in the European cities of the 17th century, but the rules are the same. Why is there such a great desire to spend a few hours with likeminded peers in the age of the internet?

by Andrian Kreye, Editor, The Feuilleton (Arts & Essays), Süddeutsche Zeitung, Munich.

The salon that marked an entire era: Duchess Anna Amalia of Saxe-Weimar with her guests, including Goethe (third from left) and Herder (far right).

For more than a century now, the salon as a gathering for the exchange of ideas has been a footnote in the history of ideas. With the advent of truly mass media, this exchange was first democratized, then, in rapid and parallel changes, diluted, radicalized, toned down, turned up, turned upside down and back again. Only recently has a longing emerged for those afternoons in the grand suites of the socialites of the Paris, Vienna, Berlin or Weimar of centuries past, where streams of thought turned into tides of history, where refined social gatherings of the cultured elites became the engine of the Enlightenment.

Just like back then, today's new salons are mostly exclusive, if not closed, circles. If you do happen to be invited, though, you will swiftly notice the intellectual force of these gatherings. Take a summer's day on Eastover Farm in Connecticut, for example, in the middle of green rolling hills with horse paddocks and orchards under the sunny skies of New England. This is where New York literary agent John Brockman spends his weekends. Once a year, he invites a small group of scientists, artists and intellectuals who form the backbone of what is called the Third Culture. This is less a new culture than a new form of debate across all the disciplines traditionally divided into the humanities and the natural sciences, i.e., the first and second cultures.

On that weekend, for example, he had invited a half-dozen men, each of whom had a large footprint in his respective discipline: the gene researcher Craig Venter, who was the first to sequence the human genome; his colleague George Church; Robert Shapiro, who explored the chemistry of DNA; the astronomer Dimitar Sasselov; the quantum physicist Seth Lloyd; and the physicist Freeman Dyson, who sees in his role as scientist the need to continually question universally accepted truths. A few science writers were also present, along with Deborah Treisman, literary editor at the New Yorker.

At some of his other meetings, the number of Nobel Laureates might have been higher, but the question under discussion in the warm summer wind among rustling tops of maple trees with jugs full of freshly made lemonade carried the utmost weight: "What is life?" Seth Lloyd formulated the problem right at the start: science knows everything about the origin of the universe, but almost nothing about the origin of life. Without this knowledge, the sciences, on the threshold of the biological age, are groping in the dark.

Brockman had deliberately chosen the invited scientists as representatives of different fields, who, for years, had understood the need to think across the scientific disciplines. But even then, you could feel like an outsider, as was the case when Robert Shapiro made a joke about ribonucleic acids, which was greeted with boisterous laughter by the scientists.

Despite their intense scientific depth, John Brockman runs these gatherings with the cool of an old-school bohemian. A lot of these meetings indeed mark the beginning of a new phase in the history of science. One example came a few years back, when he brought together the luminaries of behavioral economics just before the financial crisis plunged mainstream economics into a massive identity crisis. Another was the meeting of researchers on the new science of morality, where it was noted that the widening political divides were signs of the disintegration of American society. Organizing these gatherings over summer weekends at his country farm, he assumes a role that actually dates from the 17th and 18th centuries, when the ladies of the great salons held morning and evening meetings in their living rooms under the guise of sociability, while actually fostering the convergence of the key ideas of the Enlightenment.


Not all salonnières were content to play the host role—Johanna Schopenhauer (the mother of the philosopher Arthur, here with her daughter Adele) was a significant writer with an extensive oeuvre.

The salon is still regarded as a mysterious world of thoughts and ideas, a world whose participants were soon consigned to the role of historical figures in history books. In the early days of salon culture, these meetings were incubators of new ideas as well as the first form of an urban, bourgeois culture. The first salons formed in Paris in the early 17th century, when the nobles left their estates and gathered in the capital around the king. Initially, these gatherings cultivated early manifestations of bourgeois culture such as music and literature. But soon, in the 18th century, philosophers such as Voltaire and Diderot appeared and prepared the intellectual ground for the French Revolution.

In all the major cities of Europe, it soon became common for ladies of high society to gather influential thinkers around them. For their time, these were often radical gatherings, because the salons dissolved the rigid boundaries between social classes. With the rational thinking of the Enlightenment, the reputation a person enjoyed was measured in terms of intellect, not status or wealth. Alongside Paris, Berlin and Vienna established themselves as cities of salon culture. But in small towns too, intellectual life soon revolved around salons. Legendary were the salons of Weimar, where Johanna Schopenhauer, the mother of the future philosopher Arthur, and the Duchess Anna Amalia of Saxe-Weimar-Eisenach counted Johann Wolfgang von Goethe and Friedrich Schiller among their guests.


At the end of the 18th century, the revolutionary spirit was present in the salons of Caroline Schelling in Mainz. The Prussian military arrested Schelling in 1793 for her links to the Jacobins.

At the same time, England developed the first coffee house culture. In 1650, the first English café, called the Grand Café, opened in Oxford. The open structure of the cafés had a tremendous effect on the culture of debate, but so did coffee and tea, the new drinks from the colonies. In a country in which the entire population drank alcohol at any time of day, the stimulant caffeine acted as fertilizer for the burgeoning cultures of ideas. But it was mostly the salons and cafés of Europe (and later America) that gave birth to the fundamental principle of progress and innovation, namely the network. Indeed, it was rarely sudden eureka moments in the solitude of the laboratory or the study that brought humanity from the dark times of the pre-modern era into the light of reason. It was the fierce debates held in the salons and cafés that allowed the ideas behind these eureka moments to mature.


The salon of the Duchess Anna Amalia was called the "Garden of the Muses." In addition to her role as salonnière, the Duchess was also a generous patron of Goethe and Schiller.

No wonder that the nostalgia for these meetings between big thinkers is so strong today. With his 2011 film "Midnight in Paris", Woody Allen, the greatest of the urban romantics, created a cinematic monument to this nostalgia. As the American author Gil Pender roams the nighttime streets and alleys of Paris, he accidentally falls into a time portal and lands in the Paris of the 1920s. There, in the rooms of the writer and collector Gertrude Stein, with walls covered in works of art, he meets Pablo Picasso, F. Scott Fitzgerald, Ernest Hemingway, Salvador Dalí and Luis Buñuel. It is a tribute to the small world of bohemians that gave birth to so many great things in the history of culture.

This nostalgia fits perfectly into an age when the mass media abandon the models of publications and programs to turn into networks with an infinite number of nodes. Facebook, Twitter and countless blogs and forums perfectly simulate this exciting exchange of ideas for an audience of billions. In terms of today's digital Weltgeist, there is already talk of a global salon and a universal brain. Could it be that nostalgic interest in the salons of the past is a desire for more clarity in the face of the complexity of the networked future?

In the digital era we might very well witness once again the phenomenon that Jürgen Habermas has called the "structural transformation of the public sphere," the rise of a new bourgeoisie and mass society that began with the salons. There is no across-the-board answer to this question; that's impossible when the structural transformation of the digital age affects various spheres of the international community differently. In Europe and America, digital media keep leading into new cul-de-sacs and roundabouts of communication. Social networks not only claim to be the successors of the salons; they evoke the ominous metaphysical principle of the Weltgeist (global mind), while actually reducing the principle of the salons' intellectual eruptions to a de-intellectualized white noise.


Salon of the 21st century: the literary agent John Brockman (center, with hat) in the circle of the scientists of the Edge network during one of his legendary weekends at Eastover Farm in Connecticut.

In emerging and developing countries, on the other hand, the use of digital media has indeed made Habermas's structural transformation of the public sphere possible, in much the same way as the salons and early mass media did in the Europe of the Enlightenment. In countries like Iran, Egypt or Ukraine, every change begins with dangerous ideas, because if ideas are to make a difference, they must be dangerous.

This was no different in the early salons. When the great intellectuals and artists of the time met in literary salons, it was by no means solely to discuss questions of aesthetics or literary form. The late-18th-century salons of the woman of letters Caroline Schelling in Mainz and Göttingen, for example, gathered revolutionary spirits who took their stand with Paris at the dawn of a new era that brought the demise of the monarchy. Caroline Schelling was arrested, slandered and vilified, but that did not change the fact that, under her leadership, the Jacobins eventually formed in Germany too, as a force opposing monarchy and empire.

This is the very reason that an autocracy such as China uses its power to promote the social concept of the individual: a single individual cannot spread dangerous ideas. This fear of the power of networks also explains the unusually harsh persecution of religious communities. It is bad enough that faith calls into question the party's sovereignty over thinking; worse, danger for those in power also lurks in the networks of churches and monasteries.

In the birthplaces of the Enlightenment, in America and Europe, the current struggle for sovereignty over interpretation is not a political fight, though—since the end of ideologies, it has dissolved into countless, often regional micro-conflicts. Similarly, the battle between religion and science has been in play for a long time. Yet it is science that challenges the certainties.


The Internet can enlarge the circle of great minds exchanging ideas ad infinitum. To avoid getting lost in the vastness of cyberspace, thinkers and creators have started to meet again on a regular basis in various new forms of salons, such as DLD, the Aspen Ideas Festival, and the TED Conference. What started as an elite gathering of Silicon Valley pioneers thirty years ago has turned into a global forum of ideas, spread via internet videos of lectures and talks. Twice a year, about a thousand scientists, artists, activists and entrepreneurs come together in places like Monterey, Vancouver, Oxford or Rio to learn about new ideas "worth spreading," to quote the conference's motto. In many cases, such ideas will have an impact on the world for years on end.

At this point, the memory of that summer day in Connecticut comes into focus, and the moment when the scientists asking questions about the origin of life talked about their research and projects. Craig Venter told of his plans to develop bacteria that could supplant fossil fuels as an energy source. George Church described the sequencing of the mammoth genome. Dimitar Sasselov reported on his search for Earth-like planets. Seth Lloyd explained the unprecedented opportunities of the quantum computer. What sounded like science fiction to the onlookers under the maple trees only a few years ago is today, to a large extent, scientific reality.


In the New York of the 1960s, hardly anyone understood the network of eccentric artist Andy Warhol, as seen here with John Brockman (left) and Bob Dylan (right) in the "Factory", a hybrid of salon, studio, and party room.

Of course, John Brockman long ago put his salon online. Leading scientists, artists and prominent intellectuals regularly meet on his edge.org website to converse about the issues of our time. Annually, there is a concerted action in which he asks the entire network a big question. Eight years ago, the central question for this salon culture was: "What is your most dangerous idea?" More than a hundred responses were submitted and published. It reads like intellectual fireworks. In your own head, you quickly feel how ideas clash, release energy and generate new ones. It is then that you experience the intellectual thrill that has always inspired the salons.
In the meantime, Brockman's arena of ideas has sparked countless likeminded gatherings of all scales and fields. Conferences have established themselves as a distinct form of communication, because the network tends to be significantly more effective and fruitful beyond the Internet. Apart from their outward format, events such as the TED Conferences, the Aspen Ideas Festival, PopTech, or Digital Life Design (DLD) have little in common with the congresses and meetings of old. They have long since become the new crucibles in the history of ideas. The American TED Conference especially has shown in recent years how the salon of the 21st century can evolve. What started in 1984 as a meeting of Silicon Valley elites under the banner "Technology, Entertainment, Design" is now a global network that utilizes all channels of communication (conferences, online videos, books, TV, radio, blogs) to make ideas blossom and develop on a global scale. Twice a year, a small circle from this large network meets in a cosmopolitan city ... in the spirit of the salons of yesteryear.

Monday, June 16, 2014

How to Read James Joyce's Ulysses on Bloomsday

Via Open Culture, here is a quick guide to everything you need to enjoy reading James Joyce's Ulysses on June 16, the day in 1904 on which the action of the book takes place, otherwise known as Bloomsday. Interestingly, 1904 was the year Joyce left Dublin, the city of his youth. And June 16 was the day on which he and Nora, who would become his wife, first went out on a date.

The only novel in the history of literature more daunting to most readers than Ulysses is Joyce's last novel, Finnegans Wake, the companion to Ulysses. In Joyce's mind, Ulysses was the "daytime" book and Finnegans Wake the "nighttime" book, a nearly impenetrable text, written over a period of 17 years (with the assistance of Samuel Beckett as typist) in an idiosyncratic language that is almost more code than language.

In comparison, Ulysses is a piece of cake.

If you were in Dublin today, you could take a tour of all of the places Leopold Bloom visits through the course of the novel. It's amazing that Joyce was able to remember Dublin so precisely despite having been away from the city for years before writing the book.

Everything You Need to Enjoy Reading James Joyce’s Ulysses on Bloomsday

June 16th, 2014


Since its publication in 1922, James Joyce’s Ulysses has enjoyed a status, in various places and in various ways, as The Book to Read. Alas, this Modernist novel of Dublin on June 16, 1904 has also attained a reputation as The Book You Probably Can’t Read — or at least not without a whole lot of work on the side. In truth, nobody needs to turn themselves into a Joyce scholar to appreciate it; the uninitiated reader may not enjoy it on every possible level, but they can still, without a doubt, get a charge from this piece of pure literature. Today, on this Bloomsday 2014, we offer you everything that may help you get that charge, starting with Ulysses as a free eBook (iPad/iPhone - Kindle + Other Formats - Read Online Now). Or perhaps you’d prefer to listen to the novel as a free audio book; you can even hear a passage read by Joyce himself.


The work may stand as a remarkably rich textual achievement, but it also has a visual history: we’ve previously featured, for instance, Henri Matisse’s illustrated 1935 edition of the book, Joyce’s own sketch of protagonist Leopold Bloom (below), and Ulysses “Seen,” a graphic novel adaptation-in-progress.


Even Vladimir Nabokov, obviously a formidable literary power himself, added to all this when he sketched out a map of the paths Bloom and Stephen Dedalus (previously seen in Joyce’s A Portrait of the Artist as a Young Man) take through Dublin in the book.


Other high-profile Ulysses appreciators include Stephen Fry, who did a video expounding upon his love for it, and Frank Delaney, whose podcast Re: Joyce, as entertaining as the novel itself, will examine the entire text line-by-line over 22 years. Still, like any vital work of art, Ulysses has drawn detractors as well. Irving Babbitt, among the novel’s early reviewers, said it evidenced “an advanced stage of psychic disintegration”; Virginia Woolf, having quit at page 200, wrote that “never did any book so bore me.” But bored or thrilled, each reader has their own distinct experience with Ulysses, and on this Bloomsday we’d like to send you on your way to your own. (Or maybe you have a different way of celebrating, as the first Bloomsday revelers did in 1954.) Don’t let the towering novel’s long shadow darken it. Remember the whole thing comes down to an Irishman and his manuscripts — many of which you can read online.


Colin Marshall hosts and produces Notebook on Cities and Culture and writes essays on cities, language, Asia, and men’s style. He’s at work on a book about Los Angeles, A Los Angeles Primer. Follow him on Twitter at @colinmarshall or on Facebook.

Monday, March 24, 2014

Omnivore - A Reconfiguration of Critical Theory

From Bookforum's Omnivore blog, here is another interesting and informative collection of links, this time on critical theory and, by extension, philosophy.

One of the more interesting articles is an energetic attack on Jürgen Habermas, one of the pet theorists of Wilberian integral theory. The abstract is posted below.


A reconfiguration of critical theory

Mar 20, 2014

Law & Critique (2014, Forthcoming)
Abstract:

Habermas' cosmopolitan project seeks to transform global politics into an emancipatory activity in order to compensate for the disempowering effects of globalization. The project is traced through three vicious circles which stem from Habermas' commitment to intersubjectivity. Normative politics always raises a vicious circle because politics is only needed to the extent that an issue has become problematized through want of intersubjective agreement. At the domestic level Habermas solves this problem by constitutionalizing transcendental presuppositions political participants cannot avoid making. This fix will not work at the global level because it is pre-political as between human individuals. Habermas therefore premises cosmopolitics on the transformation of nation-states into sites of participatory politics, engagement in which will eventually ignite a global cosmopolitan consciousness. This transformation depends on the constitutionalization of existing UN structures and their enforcement of an undefined and (therefore) 'uncontroversial' core of human rights. Unable to ground this project in social practice, Habermas eventually disregards his own lodestar of intersubjectivity based in social practice by relying on the prediscursive concept of human dignity. This move is not merely philosophically inconsistent, it also opens the door to the moralization of politics and the imposition of human rights down the barrel of a gun.


Download the PDF.

Thursday, January 30, 2014

David Cronenberg's Introduction to Kafka's Metamorphosis

Who could possibly be better to write an introduction to a new edition of Franz Kafka's classic novella The Metamorphosis than David Cronenberg? No one, that's who.

From The Paris Review:

The Beetle and the Fly


January 17, 2014 | by David Cronenberg

From the original cover of Kafka’s Die Verwandlung, 1915.

I woke up one morning recently to discover that I was a seventy-year-old man. Is this different from what happens to Gregor Samsa in The Metamorphosis? He wakes up to find that he’s become a near-human-sized beetle (probably of the scarab family, if his household’s charwoman is to be believed), and not a particularly robust specimen at that. Our reactions, mine and Gregor’s, are very similar. We are confused and bemused, and think that it’s a momentary delusion that will soon dissipate, leaving our lives to continue as they were. What could the source of these twin transformations possibly be? Certainly, you can see a birthday coming from many miles away, and it should not be a shock or a surprise when it happens. And as any well-meaning friend will tell you, seventy is just a number. What impact can that number really have on an actual, unique physical human life?

In the case of Gregor, a young traveling salesman spending a night at home in his family’s apartment in Prague, awakening into a strange, human/insect hybrid existence is, to say the obvious, a surprise he did not see coming, and the reaction of his household—mother, father, sister, maid, cook—is to recoil in benumbed horror, as one would expect, and not one member of his family feels compelled to console the creature by, for example, pointing out that a beetle is also a living thing, and turning into one might, for a mediocre human living a humdrum life, be an exhilarating and elevating experience, and so what’s the problem? This imagined consolation could not, in any case, take place within the structure of the story, because Gregor can understand human speech, but cannot be understood when he tries to speak, and so his family never think to approach him as a creature with human intelligence. (It must be noted, though, that in their bourgeois banality, they somehow accept that this creature is, in some unnamable way, their Gregor. It never occurs to them that, for example, a giant beetle has eaten Gregor; they don’t have the imagination, and he very quickly becomes not much more than a housekeeping problem.) His transformation seals him within himself as surely as if he had suffered a total paralysis. These two scenarios, mine and Gregor’s, seem so different, one might ask why I even bother to compare them. The source of the transformations is the same, I argue: we have both awakened to a forced awareness of what we really are, and that awareness is profound and irreversible; in each case, the delusion soon proves to be a new, mandatory reality, and life does not continue as it did.

Is Gregor’s transformation a death sentence or, in some way, a fatal diagnosis? Why does the beetle Gregor not survive? Is it his human brain, depressed and sad and melancholy, that betrays the insect’s basic sturdiness? Is it the brain that defeats the bug’s urge to survive, even to eat? What’s wrong with that beetle? Beetles, the order of insect called Coleoptera, which means “sheathed wing” (though Gregor never seems to discover his own wings, which are presumably hiding under his hard wing casings), are notably hardy and well adapted for survival; there are more species of beetle than any other order on earth. Well, we learn that Gregor has bad lungs—they are “none too reliable”—and so the Gregor beetle has bad lungs as well, or at least the insect equivalent, and perhaps that really is his fatal diagnosis; or perhaps it’s his growing inability to eat that kills him, as it did Kafka, who ultimately coughed up blood and died of starvation caused by laryngeal tuberculosis at the age of forty. What about me? Is my seventieth birthday a death sentence? Of course, yes, it is, and in some ways it has sealed me within myself as surely as if I had suffered a total paralysis. And this revelation is the function of the bed, and of dreaming in the bed, the mortar in which the minutiae of everyday life are crushed, ground up, and mixed with memory and desire and dread. Gregor awakes from troubled dreams which are never directly described by Kafka. Did Gregor dream that he was an insect, then awake to find that he was one? “‘What in the world has happened to me?’ he thought.” “It was no dream,” says Kafka, referring to Gregor’s new physical form, but it’s not clear that his troubled dreams were anticipatory insect dreams.
In the movie I co-wrote and directed of George Langelaan’s short story The Fly, I have our hero Seth Brundle, played by Jeff Goldblum, say, while deep in the throes of his transformation into a hideous fly/human hybrid, “I’m an insect who dreamt he was a man and loved it. But now the dream is over, and the insect is awake.” He is warning his former lover that he is now a danger to her, a creature with no compassion and no empathy. He has shed his humanity like the shell of a cicada nymph, and what has emerged is no longer human. He is also suggesting that to be a human, a self-aware consciousness, is a dream that cannot last, an illusion. Gregor too has trouble clinging to what is left of his humanity, and as his family begins to feel that this thing in Gregor’s room is no longer Gregor, he begins to feel the same way. But unlike Brundle’s fly self, Gregor’s beetle is no threat to anyone but himself, and starves and fades away like an afterthought as his family revels in their freedom from the shameful, embarrassing burden that he has become.



Jeff Goldblum in Cronenberg’s The Fly, 1986.

When The Fly was released in 1986, there was much conjecture that the disease that Brundle had brought on himself was a metaphor for AIDS. Certainly I understood this—AIDS was on everybody’s mind as the vast scope of the disease was gradually being revealed. But for me, Brundle’s disease was more fundamental: in an artificially accelerated manner, he was aging. He was a consciousness that was aware that it was a body that was mortal, and with acute awareness and humor participated in that inevitable transformation that all of us face, if only we live long enough. Unlike the passive and helpful but anonymous Gregor, Brundle was a star in the firmament of science, and it was a bold and reckless experiment in transmitting matter through space (his DNA mixes with that of an errant fly) that caused his predicament.

Langelaan’s story, first published in Playboy magazine in 1957, falls firmly within the genre of science fiction, with all the mechanics and reasonings of its scientist hero carefully, if fancifully, constructed (two used telephone booths are involved). Kafka’s story, of course, is not science fiction; it does not provoke discussion regarding technology and the hubris of scientific investigation, or the use of scientific research for military purposes. Without sci-fi trappings of any kind, The Metamorphosis forces us to think in terms of analogy, of reflexive interpretation, though it is revealing that none of the characters in the story, including Gregor, ever does think that way. There is no meditation on a family secret or sin that might have induced such a monstrous reprisal by God or the Fates, no search for meaning even on the most basic existential plane. The bizarre event is dealt with in a perfunctory, petty, materialistic way, and it arouses the narrowest range of emotional response imaginable, almost immediately assuming the tone of an unfortunate natural family occurrence with which one must reluctantly contend.

Stories of magical transformations have always been part of humanity’s narrative canon. They articulate that universal sense of empathy for all life forms that we feel; they express that desire for transcendence that every religion also expresses; they prompt us to wonder if transformation into another living creature would be a proof of the possibility of reincarnation and some sort of afterlife and is thus, however hideous or disastrous the narrative, a religious and hopeful concept. Certainly my Brundlefly goes through moments of manic strength and power, convinced that he has combined the best components of human and insect to become a super being, refusing to see his personal evolution as anything but a victory even as he begins to shed his human body parts, which he carefully stores in a medicine cabinet he calls the Brundle Museum of Natural History.

There is none of this in The Metamorphosis. The Samsabeetle is barely aware that he is a hybrid, though he takes small hybrid pleasures where he can find them, whether it’s hanging from the ceiling or scuttling through the mess and dirt of his room (beetle pleasure) or listening to the music that his sister plays on her violin (human pleasure). But the Samsa family is the Samsabeetle’s context and his cage, and his subservience to the needs of his family both before and after his transformation extends, ultimately, to his realization that it would be more convenient for them if he just disappeared, that it would in fact be an expression of his love for them, and so he does just that, by quietly dying. The Samsabeetle’s short life, fantastical though it is, is played out on the level of the resolutely mundane and the functional, and fails to provoke in the story’s characters any hint of philosophy, meditation, or profound reflection. How similar would the story be, then, if on that fateful morning, the Samsa family found in the room of their son not a young, vibrant traveling salesman who is supporting them by his unselfish and endless labor, but a shuffling, half-blind, barely ambulatory eighty-nine-year-old man using insectlike canes, a man who mumbles incoherently and has soiled his trousers and out of the shadowland of his dementia projects anger and induces guilt? If, when Gregor Samsa woke one morning from troubled dreams, he found himself transformed right there in his bed into a demented, disabled, demanding old man? His family is horrified but somehow recognizes him as their own Gregor, albeit transformed. Eventually, though, as in the beetle variant of the story, they decide that he is no longer their Gregor, and that it would be a blessing for him to disappear.

When I went on my publicity tour for The Fly, I was often asked what insect I would want to be if I underwent an entomological transformation. My answers varied, depending on my mood, though I had a fondness for the dragonfly, not only for its spectacular flying but also for the novelty of its ferocious underwater nymphal stage with its deadly extendable underslung jaw; I also thought that mating in the air might be pleasant. Would that be your soul, then, this dragonfly, flying heavenward? came one response. Is that not really what you’re looking for? No, not really, I said. I’d just be a simple dragonfly, and then, if I managed to avoid being eaten by a bird or a frog, I would mate, and as summer ended, I would die.

This essay appears as the introduction to Susan Bernofsky’s new translation of The Metamorphosis.

David Cronenberg is a Canadian filmmaker whose career has spanned more than four decades. Cronenberg’s many feature films include Stereo, Crimes of the Future, Fast Company, The Brood, The Dead Zone, The Fly, Naked Lunch, M. Butterfly, Crash, A History of Violence, and A Dangerous Method. His most recent film, Cosmopolis, starred Robert Pattinson and was an adaptation of Don DeLillo’s 2003 novel. Consumed, his first novel, will be published in September.

Saturday, June 15, 2013

Santayana on the Appreciation of Beauty - The Partially Examined Life, #77


George Santayana (1863-1952) was the first and maybe still the foremost Hispanic-American philosopher (as a student he worked under William James at Harvard). His embrace of naturalism and rejection of idealism were the foundation for a spiritual philosophy not based in religion. Here is some brief info on his life from the Stanford Encyclopedia of Philosophy:
Philosopher, poet, literary and cultural critic, George Santayana is a principal figure in Classical American Philosophy. His naturalism and emphasis on creative imagination were harbingers of important intellectual turns on both sides of the Atlantic. He was a naturalist before naturalism grew popular; he appreciated multiple perfections before multiculturalism became an issue; he thought of philosophy as literature before it became a theme in American and European scholarly circles; and he managed to naturalize Platonism, update Aristotle, fight off idealisms, and provide a striking and sensitive account of the spiritual life without being a religious believer. His Hispanic heritage, shaded by his sense of being an outsider in America, captures many qualities of American life missed by insiders, and presents views equal to Tocqueville in quality and importance. Beyond philosophy, only Emerson may match his literary production. As a public figure, he appeared on the front cover of Time (3 February 1936), and his autobiography (Persons and Places, 1944) and only novel (The Last Puritan, 1936) were the best-selling books in the United States as Book-of-the-Month Club selections. The novel was nominated for a Pulitzer Prize, and Edmund Wilson ranked Persons and Places among the few first-rate autobiographies, comparing it favorably to Yeats's memoirs, The Education of Henry Adams, and Proust's Remembrance of Things Past. Remarkably, Santayana achieved this stature in American thought without being an American citizen. He proudly retained his Spanish citizenship throughout his life. Yet, as he readily admitted, it is as an American that his philosophical and literary corpuses are to be judged. 
On The Partially Examined Life podcast, they discuss one of his classic books, The Sense of Beauty: Being the Outline of Aesthetic Theory (available in paperback and as a free Kindle edition).

Episode 77: Santayana on the Appreciation of Beauty



Podcast: Play in new window | Download (Duration: 1:47:42 — 98.7MB)

On George Santayana’s The Sense of Beauty (1896)


What are we saying when we call something “beautiful?” Are we pointing out an objective quality that other people (anyone?) can ferret out, or just essentially saying “yay!” without any logic necessarily behind our exclamation? The poet and philosopher Santayana thought that while aesthetic appreciation is an immediate experience–we don’t “infer” the beauty of something by recognizing some natural qualities that it has–we can nonetheless analyze the experience after the fact to uncover a number of grounds on which we might appreciate something. He divides these into areas of matter (e.g. the pretty color or texture), form (the relations between perceived parts), and expression (what external to the work itself does it bring to mind?) and ends up being able to distinguish high art (form-centric) from more savage forms (centered on matter or expression) while distinguishing real appreciation (which can include any of the three elements) from mere pretension (when you don’t really have an immediate experience at all but merely recognize that you’re supposed to think that this is good).

The regular foursome talk through Santayana’s theory with regard to expressionist painting, rock ‘n roll, beautiful landscapes, abstract expressionism, and more. Read more about the topic and get the book.

End song: “Sense of Beauty” by Mark Lint with help from some PEL listeners. Read about it.

Please go to partiallyexaminedlife.com/donate to help support our efforts. A recurring gift will gain you all the benefits of PEL Citizenship. Thanks!

Sunday, May 12, 2013

Alissa Quart - Is Applying Neuroscience to the Study of Literature the Best Way to Read a Novel?


I wonder how my life would have been different if the field of neurohumanities had existed when I was in school the first time, getting my master's degree in humanities. At this point I'm not sure how I feel about neuroscience invading literature, poetry, and art.

Adventures in Neurohumanities

Applying neuroscience to the study of literature is fashionable. But is it the best way to read a novel?



Alissa Quart
May 7, 2013 | This article appeared in the May 27, 2013 edition of The Nation.

Brain scan image from Libertas Academica

At Stanford University in 2012, a young literature scholar named Natalie Phillips oversaw a big project: a new way of studying the nineteenth-century novelist Jane Austen. No surprise there—Austen, a superstar of English literature and the inspiration for an endless array of Hollywood and BBC productions based on her work, has been the subject of thousands of scholarly papers.

But the Stanford study was different. Phillips used a functional magnetic resonance imaging (fMRI) machine to track the blood flow of readers’ brains when they read Mansfield Park. The subjects—mostly graduate students—were asked to skim an excerpt and then read it closely. The results were part of a study on reading and distraction.

The “neuro novel” story was quickly picked up by the mainstream media, from NPR to The New York Times. But the Austen project wasn’t merely a clever one-off—the brainchild, so to speak, of one imaginatively interdisciplinary scholar. And it wasn’t just the result of ambitious academics crossing brain science with “the marriage plot” in unholy matrimony simply to grab headlines. The Stanford study reflects a real trend in the humanities. At Yale University, Lisa Zunshine, now a literature scholar at the University of Kentucky, was part of a research team that studied modernist authors using fMRI, also in order to better understand reading. Rather than a cramped office or library carrel, the researchers got to use the Haskins Laboratory in New Haven, with funding by the Teagle Foundation, to carry out their project, in which twelve participants were given texts with higher and lower levels of complexity and had their brains monitored.

Duke and Vanderbilt universities now have neuroscience centers with specialties in humanities hybrids, from “neurolaw” onward: Duke has a Neurohumanities Research Group and even a neurohumanities abroad program. The money is serious as well. Semir Zeki, a neuroaesthetics specialist—that is, neuroscience applied to the study of visual art—was the recipient of a £1 million grant in the United Kingdom. And there are conferences aplenty: in 2012, you could have attended the aptly named Neuro-Humanities Entanglement Conference at Georgia Tech.

Neurohumanities has been positioned as a savior of today’s liberal arts. The Times is able to ask “Can ‘Neuro Lit Crit’ Save the Humanities?” because of the assumption that literary study has descended into cultural irrelevance. Neurohumanities, then, is an attempt to provide the supposedly loosey-goosey art and lit crowds with the metal spines of hard science.

The forces driving this phenomenon are many. Sure, it’s the result of scientific advancement. It’s also part of an interdisciplinary push into what is broadly termed the digital humanities, and it can be seen as offering an end run around intensifying funding challenges in the humanities. As Columbia University historian Alan Brinkley wrote in 2009, the historic gulf between funding for science and engineering on the one hand and the humanities on the other is “neither new nor surprising. What is troubling is that the humanities, in fact, are falling farther and farther behind other areas of scholarship.”

Neurohumanities offers a way to tap the popular enthusiasm for science and, in part, gin up more funding for humanities. It may also be a bid to give more authority to disciplines that are more qualitative and thus are construed, in today’s scientized and digitalized world, as less desirable or powerful. Deena Skolnick Weisberg, a Temple University postdoctoral fellow in psychology, wrote a 2008 paper titled “The Seductive Allure of Neuroscience Explanations,” in which she argued that the language of neuroscience affected nonexperts’ judgment, impressing them so much that they became convinced that illogical explanations actually made sense. Similarly, combining neuroscience with, say, the study of art nowadays can seem to offer an instant sheen of credibility.

But neurohumanities is also the result of something else. Neuroscience appears to be filling a vacuum where a single dominant mode of thought and criticism once existed. That plinth has been held in the American academy by critical theory, neo-Marxism and psychoanalysis. Alva Noë, a University of California, Berkeley, philosopher who might be called a “neuro doubter,” sees neurohumanities as a reaction to the previous postmodern moment. “The pre-eminence of neuroscience” has legitimated an “anti-theory stance” within the humanities, says Noë, the author of Out of Our Heads.

Noë argues that neurohumanities is the ultimate response to—and rejection of—critical theory, a mixture of literary theory, linguistics and anthropology that dominated the American humanities through the 1990s. Critical theory’s current decline was somewhat inevitable, as all intellectual movements erode over time. This was exemplified by the so-called Sokal affair in 1996, in which a physics professor named Alan Sokal submitted a hoax theoretical paper on science to Social Text, only to unmask himself and lambaste the theorists who accepted and published his piece as not understanding the science. Another clear public repudiation was the harsh Times obituary in 2004 of the philosopher Jacques Derrida, who was dubbed an “abstruse theorist”—in the obit’s headline, no less. But as critical theory’s power—along with that of Marxism and Freudianism—fades within the humanities, neurohumanities and literary Darwinism are stepping up, ready to explain how we live, love art and read a novel (or rather, how the cortex absorbs text). And while much was gained as “the brain” replaced “individual psychology” or social class readings, much has also been lost.

Critical theory offered us the fantasy that we have no control, making a fetish of haze and ambiguity and exhibiting what Noë terms “an allergy to anything essentialist.” In neurohumanities, by contrast, we do have mastery and concrete, empirical ends, which has proved more appealing, even as (or perhaps because) it is highly reductive. At least since George H.W. Bush declared the 1990s the decade of the brain, the media have been flooded with simplistic empirical answers to many of life’s questions. Neuroscience is now the favored method for explaining almost every element of human behavior. President Obama recently proposed an initiative called Brain Research Through Advancing Innovative Neurotechnologies, or BRAIN, to be modeled on the Human Genome Project. The aim is to create the first full model of brain circuitry and function. Scientists are hoping that BRAIN will be as successful (and as well funded) as the Human Genome Project turned out to be.


* * *

There are things that neuroscience is useful for, in terms of understanding behavior, but there are also things it is not all that useful for, like understanding the nuances of our reactions to poetry.

Literary studies before the advent of the neurohumanities tended to rest on murkier categories than science likes—categories such as subjectivity and interpretation. In a novel like Mansfield Park, for instance, the heroine Fanny Price is cornered into servility by both her social class and her feminine role. The judgmental Fanny smiles upon conservative social formations and condemns most others. This has led to vastly different interpretations. Lionel Trilling famously wrote that the novel’s “praise is not for social freedom, but for social stasis,” but it has also been read as feminist: a “bitter parody of conservative fiction,” in the words of Princeton University Austen scholar Claudia Johnson.

That is not to say that all neurohumanities scholars are insensitive to nuance and ambiguity. Some, like Lisa Zunshine, combine neuroscience with original interpretations of consciousness and multiple points of view in modernist novels. But other neuroaestheticians offer blunt accounts of areas of study that have long been appreciated for their complexity, such as the meaning of art or aesthetics as a means of transmitting politics and interpretation. In other words, some underlying principles of neuroscience are useful when applied to the humanities, but it needs to understand its limits.

Neuroaesthetics, an au courant mix of art history and cognitive science, asks whether our brains are structured so that paintings and precious objects move us in one way or another: one neuroimaging study, conducted at University College, London, set out to explain how we experience beauty in visual art. Ten people were shown 300 paintings while their heads were in an fMRI machine. They were asked to label the paintings as neutral, beautiful or ugly. The paintings they thought were beautiful led to increased activity in their frontal cortex, while the ugly paintings led to a similar increase in their motor cortex.

Professor Semir Zeki at UCL was responsible for this study, which he conducted through the Institute of Neuroesthetics in London and UC Berkeley. The center sets out a bold claim on its website: “the artist is, in a sense, a neuroscientist exploring the potentials and capacities of the brain, though with different tools.” Zeki’s latest paper? “The Neural Sources of Salvador Dali’s Ambiguity.”


* * *

But are multiple—and politically minded—meanings possible in the land of the neuro novel or neuro-aesthetics? Is neurohumanities, like “neuromarketing,” simply trying to help us understand and then produce cultural artifacts that will have the best effect for readers, writers and artists—political and historical context be damned?

The response to this question depends on whom you ask. Some are suspicious of what has been called “neuro-reductionism.” Jonathan Kramnick, a Rutgers University English professor who wrote a provocative essay, “Against Literary Darwinism,” for the journal Critical Inquiry, notes the rise of books with titles like The Art Instinct. “There’s an attention to the fine grain of a text that neuroscience can’t get at,” he says.

“Humanists are unwilling or unable to evaluate the science, so we just take scientists’ word for it, without following up on the evidence or knowing these claims are highly contested within their community,” says Todd Cronan, a professor of art history at Emory University. “‘Mirror neurons’ are highly debatable, yet art historians now just apply them to artworks. I think it’s worrying. And when there’s a ‘call for collaboration’ between art scholars and neuroscientists, we just marshal the scientists’ evidence.”

Jennifer Ashton, an English professor at the University of Illinois, wrote a takedown of neuroaesthetics in the academic journal Nonsite in 2011. She put it like this: “How your brain is firing won’t tell you if something is ironic, metaphorical or meaningful or if it is not.”

What does it mean, Cronan wonders, “if Matisse uses a lot of red and a neuro person says, ‘Red produced neuronal firing’?”

Literary Darwinism, another route by which the language and analytical frame of science has entered the humanities, can have an even more formulaic aspect. In one study, Jonathan Gottschall, an evolutionary lit scholar, compared 1,440 folktales from nearly fifty cultures in order to counter feminist critics and the assumption “that European tales reflect and perpetuate the arbitrary gender norms of western patriarchal societies.” His finding: there are biosocial norms that all cultures perpetuate—i.e., the feminists are wrong.

Critics also point out that neurohumanities scholars prefer formally conservative artists. Such artists are more likely to help them make general points about beauty or the act of reading: Austen or Michelangelo, for instance, were both animated by classical values like order and symmetry. And while neuroimaging may help us understand what our mind does when we read quickly or with a more careful attention, these data sets tell us next to nothing about the actual literature, nor do they give us a political understanding of a text.

It’s not hard to imagine a future when neurohumanities and neuroaesthetics have become so adulated that they rise up and out of the academy. Soon enough, they may seep into writers’ colonies and artists’ studios, where “culture producers” confronting a sagging economy and a distracted audience will embrace “Neuro Art” as their new selling point. Will writers start creating characters and plots designed to trigger the “right” neuronal responses in their readers and finally sell 20,000 copies rather than 3,000? Will artists, and advertisers who use artists, employ the lessons of neuroaestheticism to sharpen their neuromarketing techniques? After all, Robert T. Knight, a professor of psychology and neuroscience at Berkeley, is already the science adviser for NeuroFocus, a neuromarketing company that follows the engagement and attention of potential shoppers. When neuroaesthetics is fully put to use in these ways, it may do as Alva Noë said: “reduce people and culture to ends, simply to be manipulated or made marketable.”

And he has a point. Today, there’s the sudden dominance of so many ways to quantify things that used to be amorphous and that we imagined were merely expressive or personal: Big Data, Facebook, ubiquitous surveillance, the growing use of pharmaceuticals to control our moods and minds. In other words, neurohumanities is not just a change in how we see paintings or read nineteenth-century novels. It’s a small part of the change in what we think it means to be human.

Philosopher Thomas Nagel’s broadside against Darwinism and materialism is mostly an instrument of mischief, wrote Brian Leiter and Michael Weisberg in “Do You Only Have a Brain?” (Oct. 22, 2012).

Friday, April 26, 2013

World Thinkers 2013 - Intellectuals Who Shape Our Thinking

This is clearly a British list, or at least the people who assembled it seem to be - it was commissioned and published by Prospect Magazine, from the UK. The placing of Richard Dawkins as the top thinker is more than a little disconcerting. Dawkins is a closed-minded reductionist ideologue, and these are not traits I find useful in a public intellectual.

The absence of E.O. Wilson (and Noam Chomsky) from the list was explained by The Guardian as due to their lack of influence in the last twelve months:
To qualify for this year's world thinkers rankings, it was not enough to have written a seminal book, inspired an intellectual movement or won a Nobel prize several years ago (hence the absence from the 65-strong long list of ageing titans such as Noam Chomsky or Edward O Wilson); the selectors' remit ruthlessly insisted on "influence over the past 12 months" and "significance to the year's biggest questions".
It's also a little disappointing to see Steven Pinker at #3 and Slavoj Žižek at #6. However, it is heartening to see Daniel Kahneman at #10. I would have expected to see Thomas Nagel (Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False) on this list somewhere - his latest book created a lot of serious discussion about the ability of science to provide answers in every realm of our lives.

World Thinkers 2013

by Prospect / APRIL 24, 2013

The results of Prospect’s world thinkers poll

Left to right: Ashraf Ghani, Richard Dawkins, Steven Pinker 
After more than 10,000 votes from over 100 countries, the results of Prospect’s world thinkers 2013 poll are in. Online polls often throw up curious results, but this top 10 offers a snapshot of the intellectual trends that dominate our age.

THE WINNERS

1. Richard Dawkins
When Richard Dawkins, the Oxford evolutionary biologist, coined the term “meme” in The Selfish Gene 37 years ago, he can’t have anticipated its current popularity as a word to describe internet fads. But this is only one of the ways in which he thrives as an intellectual in the internet age. He is also prolific on Twitter, with more than half a million followers—and his success in this poll attests to his popularity online. He uses this platform to attack his old foe, religion, and to promote science and rationalism. Uncompromising as his message may be, he’s not averse to poking fun at himself: in March he made a guest appearance on The Simpsons, lending his voice to a demon version of himself.

2. Ashraf Ghani
Few academics get the chance to put their ideas into practice. But after decades of research into building states at Columbia, Berkeley and Johns Hopkins, followed by a stint at the World Bank, Ashraf Ghani returned to his native Afghanistan to do just that. He served as the country’s finance minister and advised the UN on the transfer of power to the Afghans. He is now in charge of the Afghan Transition Coordination Commission and the Institute for State Effectiveness, applying his experience in Afghanistan elsewhere. He is already looking beyond the current crisis in Syria, raising important questions about what kind of state it will eventually become.

3. Steven Pinker
Long admired for his work on language and cognition, the latest book by the Harvard professor Steven Pinker, The Better Angels of Our Nature, was a panoramic sweep through history. Marshalling a huge range of evidence, Pinker argued that humanity has become less violent over time. As with Pinker’s previous books, it sparked fierce debate. Whether writing about evolutionary psychology, linguistics or history, what unites Pinker’s work is a fascination with human nature and an enthusiasm for sharing new discoveries in accessible, elegant prose.

4. Ali Allawi
Ali Allawi began his career in 1971 at the World Bank before moving into academia and finally politics, as Iraq’s minister of trade, finance and defence after the fall of Saddam Hussein. Since then he has written a pair of acclaimed books, most recently The Crisis of Islamic Civilisation, and he is currently a senior visiting fellow at Princeton. “His scholarly work on post-Saddam Iraq went further than anyone else has yet done in helping us understand the complex reality of that country,” says Clare Lockhart, co-author (with Ashraf Ghani) of Fixing Failed States. “His continuing work on the Iraqi economy—and that of the broader region—is meanwhile helping to illuminate its potential, as well as pathways to a more stable and productive future.”

5. Paul Krugman
As a fierce critic of the economic policies of the right, Paul Krugman has become something like the global opposition to fiscal austerity. A tireless advocate of Keynesian economics, he has been repeatedly attacked for his insistence that government spending is critical to ending the recession. But as he told Prospect last year, “we’ve just conducted what amounts to a massive experiment on pretty much the entire OECD [the industrialised world]. It’s been as slam-dunk a victory for a more or less Keynesian view as one can possibly imagine.” His New York Times columns are so widely discussed that it is easy to overlook his academic work, which has won him a Nobel prize and made him one of the world’s most cited economists.

6. Slavoj Žižek
Slavoj Žižek’s critics seem unsure whether to dismiss him as a buffoon or a villain. The New Republic has called him “the most despicable philosopher in the west,” but the Slovenian’s legion of fans continues to grow. He has been giving them plenty to chew on—in the past year alone he has produced a 1,200-page study of Hegel, a book, The Year of Dreaming Dangerously, analysing the Arab Spring and other recent events, and a documentary called The Pervert’s Guide to Ideology. And he has done all this while occupying academic posts at universities in Slovenia, Switzerland and London. His trademark pop culture references (“If you ask me for really dangerous ideological films, I’d say Kung Fu Panda,” he told one interviewer in 2008) may have lost their novelty, but they remain a gentle entry point to his studies of Lacanian psychoanalysis and left-wing ideology.

7. Amartya Sen
Amartya Sen will turn 80 in November—making him the fourth oldest thinker on our list—but he remains one of the world’s most active public intellectuals. He rose to prominence in the early 1980s with his studies of famine. Since then he has gone on to make major contributions to developmental economics, social choice theory and political philosophy. Receiving the Nobel prize for economics in 1998, he was praised for having “restored an ethical dimension to the discussion of vital economic problems.” The author of Prospect’s first cover story in 1995, Sen continues to write influential essays and columns, in the past year arguing against European austerity. And he shows no sign of slowing down or narrowing his focus—his latest book (with Jean Drèze), An Uncertain Glory: India and its Contradictions, will be published in July.

8. Peter Higgs
The English physicist Peter Higgs lent his name to the Higgs boson, the subatomic particle discovered last year at Cern that gives mass to other elementary particles. Although Higgs is always quick to point out that others were involved in early work on the existence of the particle, he was central to the first descriptions of the boson in 1964. “Of the various people who contributed to that piece of theory,” Higgs told Prospect in 2011, “I was the only one who pointed to this particle as something that would be… of interest for experimentalists.” Higgs is expected to receive a Nobel prize this year for his achievements.

9. Mohamed ElBaradei
The former director general of the UN’s international atomic energy agency and winner of the 2005 Nobel peace prize, Mohamed ElBaradei has become one of the most prominent advocates of democracy in Egyptian politics over the past two years. Since December, ElBaradei has been the coordinator of the National Salvation Front, a coalition of political parties dedicated to opposing what they see as President Mohamed Morsi’s attempts to secure power for himself and impose a new constitution favouring Islamist parties. Reflecting widespread concern about Morsi’s actions, ElBaradei has accused the president of appointing himself “Egypt’s new pharaoh.”

10. Daniel Kahneman
Since the publication of Thinking, Fast and Slow in 2011, Daniel Kahneman has become an unlikely resident at the top of the bestseller lists. His face has even appeared on posters on the London Underground, with only two words of explanation: “Thinking Kahneman.” Although he is a psychologist by training, his work on our capacity for making irrational decisions helped create the field of behavioural economics, and he was awarded the Nobel prize for economics in 2002. His book has now brought these insights to a wider audience, making them more influential than ever.

Biographies by Daniel Cohen, Jay Elwes and David Wolf. Additional research by Luke Neima and Lucy Webster

RANKINGS 11 TO 65

11. Steven Weinberg, physicist
12. Jared Diamond, anthropologist
13. Oliver Sacks, psychologist
14. Ai Weiwei, artist
15. Arundhati Roy, writer
16. Nate Silver, statistician
17. Asghar Farhadi, filmmaker
18. Ha-Joon Chang, economist
19. Martha Nussbaum, philosopher
20. Elon Musk, businessman
21. Michael Sandel, philosopher (see below)
22. Niall Ferguson, historian
23. Hans Rosling, statistician
24 = Anne Applebaum, journalist
24 = Craig Venter, biologist
26. Shinya Yamanaka, biologist
27. Jonathan Haidt, psychologist
28. George Soros, philanthropist
29. Francis Fukuyama, political scientist
30. James Robinson and Daron Acemoglu, political scientist and economist
31. Mario Draghi, economist
32. Ramachandra Guha, historian
33. Hilary Mantel, novelist
34. Sebastian Thrun, computer scientist
35. Zadie Smith, novelist
36 = Hernando de Soto, economist
36 = Raghuram Rajan, economist
38. James Hansen, climate scientist
39. Christine Lagarde, economist
40. Roberto Unger, philosopher
41. Moisés Naím, political scientist
42. David Grossman, novelist
43. Andrew Solomon, writer
44. Esther Duflo, economist
45. Eric Schmidt, businessman
46. Wang Hui, political scientist
47. Fernando Savater, philosopher
48. Alexei Navalny, activist
49. Katherine Boo, journalist
50. Anne-Marie Slaughter, political scientist
51. Paul Collier, development economist
52. Margaret Chan, health policy expert
53. Sheryl Sandberg, businesswoman
54. Chen Guangcheng, activist
55. Robert Shiller, economist
56 = Ivan Krastev, political scientist
56 = Nicholas Stern, economist
58. Theda Skocpol, sociologist
59 = Carmen Reinhart, economist
59 = Ngozi Okonjo-Iweala, economist
61. Jeremy Grantham, investment strategist
62. Thomas Piketty and Emmanuel Saez, economists
63. Jessica Tuchman Mathews, political scientist
64. Robert Silvers, editor
65. Jean Pisani-Ferry, economist

ANALYSIS

Only three thinkers from our 2005 top 10, Richard Dawkins, Paul Krugman and Amartya Sen, appear in this year’s top spots. The panelists who drew up the longlist of 65 gave credit for the currency of candidates’ work—their influence over the past 12 months and their continuing significance for this year’s biggest questions.

Among the new entries at the top are Peter Higgs—whose inclusion is a sign of public excitement about the discoveries emerging from the world’s largest particle physics laboratory, Cern—and Slavoj Žižek, whose critique of global capitalism has gained more urgency in the wake of the financial crisis. The appearance of Steven Pinker and Daniel Kahneman, authors of two of the most successful recent “ideas books,” further demonstrates the public appetite for serious, in-depth thinking in the age of the TED talk. The inclusion of Ashraf Ghani, Ali Allawi and Mohamed ElBaradei—from Afghanistan, Iraq and Egypt, respectively—reflects the importance of their work on fostering democracies across the Muslim world in the wake of foreign interventions and the Arab Spring.

One new development was the influence of social media, with just over half of voters coming to the world thinkers homepage via Twitter or Facebook. Twitter also gave readers a chance to respond to the list and highlight notable omissions—Stephen Hawking and Noam Chomsky were popular choices.

As always, the absences are as revealing as the familiar names at the top. The failure of environmental thinkers to win many votes may be a sign of the faltering energy of the green movement. Despite the presence of climate scientists lower down the list, the movement seems to lack successors to influential public intellectuals such as Rachel Carson and James Lovelock. Serious thinkers about the internet and technology are also conspicuous by their absence. The highest-placed representative of Silicon Valley is the entrepreneur Elon Musk, but beyond journalist-critics such as Evgeny Morozov and Nicholas Carr, technology still awaits its heavyweight public intellectuals (see Thomas Meaney).

Most striking of all is the lack of women at the top of this year’s list. The highest-placed woman in this year’s poll, at number 15, is Arundhati Roy, who has become a prominent left-wing critic of inequalities and injustice in modern India since the publication of her novel The God of Small Things over a decade ago.

Many thanks to all those who voted. Do let us know what you make of the results.

David Wolf

MORE ON THE WORLD THINKERS OF 2013: