Thursday, April 17, 2014

Noam Chomsky | Talks at Google

 

Professor Noam Chomsky visited Google Cambridge to answer a series of questions submitted by Google employees. Among Professor Chomsky's more recent books are On Anarchism (2013), Power Systems: Conversations on Global Democratic Uprisings and the New Challenges to U.S. Empire (interviews, 2013), and Occupy: Reflections on Class War, Rebellion and Solidarity (Occupied Media Pamphlet Series) (2013).

Noam Chomsky | Talks at Google

Published on Apr 8, 2014


Professor Noam Chomsky visits Google Cambridge to answer the following questions from Googlers:

1. Your early view of the potential abuse of the Internet as a political medium seemed to convey a wait and see attitude. How has your view evolved and where do you think the balance of power is headed? 2:43

2. What is the most interesting insight the science of Linguistics has revealed but that the public at large seems not to know about or appreciate? 13:00

3. In "Hopes and Prospects" you mention your colleague Kenneth Hale and his work with Native Americans. In your opinion, how important is the problem of language extinction? That is, how important is it for humanity to preserve the current level of linguistic diversity? 18:03

4. Can you comment on the contribution of research in statistical natural language processing to linguistics? 30:00

5. What, in your opinion, are the most effective strategies for building a more just and peaceful world? And in your view, what are the most significant takeaways from Occupy, the Arab Spring, and the Ukrainian "Euromaidan" uprising? 35:11

6. In "Hopes and Prospects" you compare Obama with Bush II. It's 4 years later now. What would you say today? 41:39

Gábor Máté MD - Attachment = Wholeness and Health or Disease, ADD, Addiction, Violence

 

In depth interview with Gábor Máté MD by Michael Mendizza. The topic is attachment (the parent-child bond type of attachment) and the two basic outcomes:
  • Secure = Wholeness and Health
  • Insecure = ADD, addiction, disease, mental illness
Here is a brief outline of the basic attachment patterns from Wikipedia:
Attachment patterns

Much of attachment theory was informed by Mary Ainsworth's innovative methodology and observational studies, particularly those undertaken in Scotland and Uganda. Ainsworth's work expanded the theory's concepts and enabled empirical testing of its tenets.[4] Using John Bowlby's early formulation, she conducted observational research on infant-parent pairs (or dyads) during the child's first year, combining extensive home visits with the study of behaviours in particular situations. This early research was published in 1967 in a book entitled Infancy in Uganda.[4] Ainsworth identified three attachment styles, or patterns, that a child may have with attachment figures: secure, anxious-avoidant (insecure) and anxious-ambivalent or resistant (insecure). She devised a procedure known as the Strange Situation Protocol as the laboratory portion of her larger study, to assess separation and reunion behaviour.[37] This is a standardised research tool used to assess attachment patterns in infants and toddlers. By creating stresses designed to activate attachment behaviour, the procedure reveals how very young children use their caregiver as a source of security.[8] Carer and child are placed in an unfamiliar playroom while a researcher records specific behaviours, observing through a one-way mirror. In eight different episodes, the child experiences separation from/reunion with the carer and the presence of an unfamiliar stranger.[37]

Ainsworth's work in the United States attracted many scholars into the field, inspiring research and challenging the dominance of behaviourism.[38] Further research by Mary Main and colleagues at the University of California, Berkeley identified a fourth attachment pattern, called disorganized/disoriented attachment (insecure). The name reflects these children's lack of a coherent coping strategy.[39] The type of attachment developed by infants depends on the quality of care they have received.[40]
Anyway, this is a nice discussion.


Wednesday, April 16, 2014

Jeremy Rifkin: "The Zero Marginal Cost Society" | Authors at Google


Jeremy Rifkin's new book is The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (2014). He is also the author of The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World (2011) and The Empathic Civilization: The Race to Global Consciousness in a World in Crisis (2009).

Rifkin stopped by Google recently to discuss his new book.

Jeremy Rifkin: "The Zero Marginal Cost Society" | Authors at Google

Published on Apr 15, 2014


In The Zero Marginal Cost Society, New York Times bestselling author Jeremy Rifkin describes how the emerging Internet of Things is speeding us to an era of nearly free goods and services, precipitating the meteoric rise of a global Collaborative Commons and the eclipse of capitalism.

Rifkin uncovers a paradox at the heart of capitalism that has propelled it to greatness but is now taking it to its death—the inherent entrepreneurial dynamism of competitive markets that drives productivity up and marginal costs down, enabling businesses to reduce the price of their goods and services in order to win over consumers and market share. (Marginal cost is the cost of producing additional units of a good or service, if fixed costs are not counted.) While economists have always welcomed a reduction in marginal cost, they never anticipated the possibility of a technological revolution that might bring marginal costs to near zero, making goods and services priceless, nearly free, and abundant, and no longer subject to market forces.
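The parenthetical definition of marginal cost can be made concrete with a small worked example. The numbers below are purely illustrative (not from Rifkin's book); they just show why the fixed cost drops out, and why an information good can have a marginal cost of essentially zero:

```python
# Illustrative example of marginal cost (hypothetical numbers, not from the book).
# Marginal cost = the change in total cost from producing one more unit;
# fixed costs cancel out of the difference.

def marginal_cost(total_cost, quantity):
    """Cost of producing one additional unit beyond `quantity`."""
    return total_cost(quantity + 1) - total_cost(quantity)

# A physical good: each extra unit consumes materials and labor.
physical = lambda q: 10_000 + 4.0 * q   # $10k fixed cost + $4 per unit
print(marginal_cost(physical, 1_000))   # 4.0 -- the fixed cost drops out

# An information good: once the first copy exists, extra copies cost ~nothing.
digital = lambda q: 10_000 + 0.0 * q
print(marginal_cost(digital, 1_000))    # 0.0 -- near-zero marginal cost
```

This is the arithmetic behind Rifkin's point: competition pushes the $4 toward $0, and for digital goods it effectively arrives there.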

Now, a formidable new technology infrastructure—the Internet of Things (IoT)—is emerging with the potential of pushing large segments of economic life to near zero marginal cost in the years ahead. Rifkin describes how the Communication Internet is converging with a nascent Energy Internet and Logistics Internet to create a new technology platform that connects everything and everyone. Billions of sensors are being attached to natural resources, production lines, the electricity grid, logistics networks, recycling flows, and implanted in homes, offices, stores, vehicles, and even human beings, feeding Big Data into an IoT global neural network. Prosumers can connect to the network and use Big Data, analytics, and algorithms to accelerate efficiency, dramatically increase productivity, and lower the marginal cost of producing and sharing a wide range of products and services to near zero, just like they now do with information goods.

Rifkin concludes that capitalism will remain with us, albeit in an increasingly streamlined role, primarily as an aggregator of network services and solutions, allowing it to flourish as a powerful niche player in the coming era. We are, however, says Rifkin, entering a world beyond markets where we are learning how to live together in an increasingly interdependent global Collaborative Commons.

About the Author: Jeremy Rifkin is the bestselling author of twenty books on the impact of scientific and technological changes on the economy, the workforce, society, and the environment. He has been an advisor to the European Union for the past decade.

Mr. Rifkin also served as an adviser to President Nicolas Sarkozy of France, Chancellor Angela Merkel of Germany, Prime Minister Jose Socrates of Portugal, Prime Minister Jose Luis Rodriguez Zapatero of Spain, and Prime Minister Janez Janša of Slovenia, during their respective European Council Presidencies, on issues related to the economy, climate change, and energy security.

Mr. Rifkin is a senior lecturer at the Wharton School's Executive Education Program at the University of Pennsylvania where he instructs CEOs and senior management on transitioning their business operations into sustainable Third Industrial Revolution economies.

Mr. Rifkin holds a degree in economics from the Wharton School of the University of Pennsylvania, and a degree in international affairs from the Fletcher School of Law and Diplomacy at Tufts University.

This Authors@Google talk was hosted by Boris Debic.

Samuel Beckett’s Only Movie, "Film," Starring Buster Keaton


Samuel Beckett is one of my favorite authors in both the fiction and drama genres. I did not until now know that he had also made a film. It is called, simply enough, Film.

Thanks to Open Culture for unearthing this gem of inscrutability.

Watch Film, Samuel Beckett’s Only Movie, Starring Buster Keaton


April 15th, 2014


Fresh off the international success of his play Waiting For Godot, Samuel Beckett made a film, called aptly enough Film. It came out in 1965 and proved to be the only motion picture the soon-to-be Nobel Prize winner would ever make. As you might expect, it is enigmatic, bleakly funny and very, very odd. You can check it out above.

The 17-minute silent short is essentially a chase movie between the camera and the main character O – as in object. Film opens with O cowering from the gaze of a couple he passes on the street. Meanwhile, the camera looms just behind his head. At his stark, typically Beckettesque flat, O covers the mirror, throws his cat and his chihuahua outside and even trashes a picture — the only piece of decoration in the flat — that seems to be staring back at him. Yet try as he might, O ultimately can’t quite evade being observed by the gaze of the camera.

Barney Rosset, editor of Grove Press, commissioned the movie and regular Beckett collaborator Alan Schneider was tapped to direct. As Schneider recalled, the first draft of the screenplay was unorthodox.
The script appeared in the spring of 1963 as a fairly baffling when not downright inscrutable six-page outline. Along with pages of addenda in Sam’s inimitable informal style: explanatory notes, a philosophical supplement, modest production suggestions, a series of hand-drawn diagrams.
It took almost a year of discussion to bring the movie’s themes and story into focus.

For the lead character Beckett wanted to hire Charlie Chaplin until he was informed by an officious secretary that Chaplin doesn't read scripts. Beckett then suggested Buster Keaton. The playwright was a longtime fan of the silent film legend. Keaton had even been offered the role of Lucky in the original American production of Godot, though the actor declined. This time around, though, Keaton signed on, even if he couldn't make heads or tails of the script.

And he wasn’t the only one. Ever since it came out, critics have puzzled over what Film is really about. Is it a statement on voyeurism in cinema? On human consciousness? On death? Beckett gave his take on the movie to the New Yorker: “It’s a movie about the perceiving eye, about the perceived and the perceiver — two aspects of the same man. The perceiver desires like mad to perceive and the perceived tries desperately to hide. Then, in the end, one wins.”

Keaton himself defined the movie even more succinctly, “A man may keep away from everybody but he can’t get away from himself.”

~ Jonathan Crow is a Los Angeles-based writer and filmmaker whose work has appeared in Yahoo!, The Hollywood Reporter, and other publications. You can follow him at @jonccrow.

Mysteries of the Human Brain Revealed as Scientists Release Detailed 3D Image of its Genes and Pathways

Below is a research update from Allen Institute for Brain Science in Seattle (funded by Microsoft billionaire Paul Allen) on some of their work in President Obama's BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies). This article comes from The Independent (UK).

Mysteries of the human brain revealed as scientists release detailed 3D image of its genes and pathways



Steve Connor - Science Editor
Wednesday 02 April 2014

Scientists have generated the first detailed pictures of the intricate events in the womb that result in the formation of the human brain. The study could prove to be a decisive breakthrough in understanding the many cognitive disorders thought to be triggered before birth – from autism to schizophrenia.

The researchers believe that the findings could one day lead to a “blueprint for building the human brain” based on knowing the precise sequence of genes that are selectively switched on and off in different parts of the embryonic organ during the critical stages of development in the womb.

Researchers at the Allen Institute for Brain Science in Seattle, funded by the Microsoft billionaire Paul Allen, analysed the brains of four human foetuses between 15 and 21 weeks to build up the first atlas of the developing brain based on differences in gene activities – a so-called “transcriptome”.

The work is part of a much wider body of research aimed at a fundamental understanding of the brain, which is often described as the most complex structure in the known universe. Last month, President Obama announced the doubling of US Government funding on his brain initiative – from $100 million to $200 million.

Other approaches in the Obama initiative include the construction of intricate wiring diagrams of how the 100 billion nerve cells of the brain communicate with one another by sending electrical signals down physical connections, known as “connectomes”.

Senior scientists believe that these revolutionary new techniques for studying the brain could transform our knowledge of how the brain works and so lead to radical new forms of prevention or treatment for the many psychological and developmental disorders that have so far defied medicine.

“This is the beginning. We want to understand the blueprint whereby we build a brain and this is one step in that direction, where we can begin to have a map of how genes are driving the process,” said Ed Lein of the Allen Institute, who led the study published in the journal Nature.

“Knowing where a gene is expressed in the brain can provide powerful clues about what its role is. This gives a comprehensive view of which genes are on and off in which specific nuclei and cell types while the brain is developing during pregnancy,” Dr Lein said.

“This means we have a blueprint for human development, an understanding of the crucial pieces necessary for the brain to form in a normal, healthy way, and a powerful way to investigate what goes wrong in disease,” he said.

The study married neuroscience with the knowledge of the human genome to produce highly detailed snapshots of gene activity at critical points in development. Each transcriptome changes over time as the brain develops, revealing the varying gene activity and nerve connections that are formed and re-formed as the human brain grows and develops in the womb between 15 and 21 weeks.

“This is the time when we’re beginning to establish the neocortex of the brain, which is responsible for many of our most distinctive cognitive features,” Dr Lein said.

Autism could be the first developmental disorder to benefit from this kind of approach to building up images of the developing brain – known as the BrainSpan Atlas. The research has already led to one potential insight into the childhood disorder, he said.

“We used the maps we created to find a hub of genetic action that could be linked to autism, and we found one. These genes were associated with the newly generated excitatory neurons in the cortex, the area of the brain that is responsible for many of the cognitive features affected in autism such as social behaviour,” Dr Lein said.

“The discovery is an exciting example of the ability of the BrainSpan Atlas to generate meaningful hypotheses about the origin of brain developmental disorders,” he added.

By comparing the genomes of humans with other mammals, such as the mouse and chimpanzee, the scientists also hope to tease out the developmental differences that make the human brain so unique in terms of its complexity and ability to generate higher thought processes, such as consciousness.

“We know that some important regions of the genome show striking [DNA] sequence differences in humans compared to other species. Since where a gene is expressed in the brain can give insight into its function, we can use our map to begin to figure out the roles of those genes in making humans distinct,” Dr Lein said.

The research is already beginning to show that these genes are enriched in the human frontal cortex, which is the part of the brain that is said to be responsible for conscious control over other parts of the brain. It could eventually lead to unprecedented clues about the molecular underpinnings of what makes the human neocortex – and humanity – so unique, he said.

Tuesday, April 15, 2014

Researchers Discover Impaired Neuronal Activity in Occasional Users of Stimulant Drugs

 

By stimulant drugs they mean cocaine, amphetamines (given to our children as Adderall), or similar prescription stimulants such as Ritalin.
Subjects were all 18- to 24-year-old college students. Occasional users were characterized as having taken stimulants an average of 12 to 15 times. The “stimulant naïve” control group included students who had never taken stimulants.
___

The brain images of the occasional users showed consistent patterns of diminished neuronal activity in the parts of the brain associated with anticipatory functioning and updating anticipation based on past trials.
I have been on Adderall since I went back to school a few years back. In my lost youth I quite liked stimulants of almost any variety. I can't imagine what all of this has done to my brain.

There are three pieces here - a summary from Constance Scharff at Psychology Today, the original press release as posted in Drug Discovery and Development Magazine, and the citation and abstract from the Journal of Neuroscience.

Cocaine and Stimulants Literally Damage Your Brain

A hard wiring of the brain may be the reason some people become addicts.

Published on April 14, 2014 by Constance Scharff, Ph.D. in Ending Addiction for Good

Scientists using functional magnetic resonance imaging (fMRI) have discovered impaired neuronal activity in occasional users of stimulant drugs. An internal hard wiring of the brain may be the reason some people become prone to drug addiction. The implication of the study is the possibility of using brain pattern activity to identify people who are at risk before they become addicted or when they exhibit addictive type behavior.

At the University of California, San Diego School of Medicine, researchers recorded brain activity via fMRI. Participants were instructed to press a button when they saw a specific image, but not if they heard a sound, across 288 trials measuring reaction times and errors on the assigned task. The study compared the brains of young adults between the ages of 18 and 25 who occasionally used cocaine, amphetamines or similar drugs to those of a “stimulant naïve” control group. The reaction times of occasional users were actually faster on the first part of the trials, implying more impulsiveness compared to the control group. On the next part, the same group made more mistakes, and their performance worsened as the tasks became more difficult. The difference between the two groups was notable.
“We used to think that drug addicts just did not hold themselves back but this work suggests that the root of this is an impaired ability to anticipate a situation and to detect trends in when they need to stop,” said Katia Harlé, a postdoctoral researcher and the study’s lead author.
Next, the researchers want to discover whether this impaired neuronal activity is permanent or can be re-wired. It may be possible to “exercise” weak areas of the brain, where attenuated neuronal activity is associated with a higher tendency toward addiction. That would offer hope of early intervention, preventing the damage that stimulants can cause to brain function and measurable reaction performance.

Not using any stimulant drug is the best choice to make as even casual or occasional users risk the possibility of addiction later on. Make the right choice and just don’t start! But, if you have recently used cocaine or stimulants, consider seeking professional advice if you need help quitting. Our brains are not something to mess with!
Here is the original press release about this study, as it was posted in Drug Discovery and Development Magazine. Below that is the original abstract and citation from the Journal of Neuroscience.

Stimulant Drugs Can Impair Neuronal Activity

Date: March 25, 2014
Source: UC San Diego School of Medicine


Researchers at the University of California, San Diego School of Medicine have discovered impaired neuronal activity in the parts of the brain associated with anticipatory functioning among occasional 18- to 24-year-old users of stimulant drugs, such as cocaine, amphetamines and prescription drugs such as Adderall.

The brain differences, detected using functional magnetic resonance imaging (fMRI), are believed to represent an internal hard wiring that may make some people more prone to drug addiction later in life.

Among the study’s main implications is the possibility of being able to use brain activity patterns as a means of identifying at-risk youth long before they have any obvious outward signs of addictive behaviors.

The study is published in the Journal of Neuroscience.

“If you show me 100 college students and tell me which ones have taken stimulants a dozen times, I can tell you those students’ brains are different,” said Martin Paulus, professor of psychiatry and a co-senior author with Angela Yu, PhD, professor of cognitive science at UC San Diego. “Our study is telling us, it’s not ‘this is your brain on drugs,’ it’s ‘this is the brain that does drugs.’”

In the study, 18- to 24-year-old college students were shown either an X or an O on a screen and instructed to press, as quickly as possible, a left button if an X appeared or a right button if an O appeared. If a tone was heard, they were instructed not to press a button. Each participant’s reaction times and errors were measured for 288 trials, while their brain activity was recorded via fMRI.

Occasional users were characterized as having taken stimulants an average of 12 to 15 times. The “stimulant naïve” control group included students who had never taken stimulants. Both groups were screened for factors, such as alcohol dependency and mental health disorders, which might have confounded the study’s results.

The outcomes from the trials showed that occasional users have slightly faster reaction times, suggesting a tendency toward impulsivity. The most striking difference, however, occurred during the “stop” trials. Here, the occasional users made more mistakes, and their performance worsened, relative to the control group, as the task became harder (i.e., when the tone occurred later in the trial).

The brain images of the occasional users showed consistent patterns of diminished neuronal activity in the parts of the brain associated with anticipatory functioning and updating anticipation based on past trials.

“We used to think that drug addicts just did not hold themselves back but this work suggests that the root of this is an impaired ability to anticipate a situation and to detect trends in when they need to stop,” said Katia Harlé, a postdoctoral researcher in the Paulus laboratory and the study’s lead author.

The next step will be to examine the degree to which these brain activity patterns are permanent or can be re-calibrated. The researchers said it may be possible to “exercise” weak areas of the brain, where attenuated neuronal activity is associated with higher tendency to addiction.

“Right now there are no treatments for stimulant addiction and the relapse rate is upward of 50%,” Paulus said. “Early intervention is our best option.”
* * * *

Full Citation:
Harlé, KM, Shenoy, P, Stewart, JL, Tapert, SF, Yu, AJ, and Paulus, MP. (2014, Mar 26). Altered Neural Processing of the Need to Stop in Young Adults at Risk for Stimulant Dependence. Journal of Neuroscience; 34(13): 4567-4580; doi: 10.1523/JNEUROSCI.2297-13.2014

Altered Neural Processing of the Need to Stop in Young Adults at Risk for Stimulant Dependence


Katia M. Harlé, Pradeep Shenoy, Jennifer L. Stewart, Susan F. Tapert, Angela J. Yu, and Martin P. Paulus

Author contributions: M.P.P. designed research; K.M.H. and P.S. analyzed data; K.M.H., P.S., J.L.S., S.F.T., A.J.Y., and M.P.P. wrote the paper.


Abstract


Identification of neurocognitive predictors of substance dependence is an important step in developing approaches to prevent addiction. Given evidence of inhibitory control deficits in substance abusers (Monterosso et al., 2005; Fu et al., 2008; Lawrence et al., 2009; Tabibnia et al., 2011), we examined neural processing characteristics in human occasional stimulant users (OSU), a population at risk for dependence. A total of 158 nondependent OSU and 47 stimulant-naive control subjects (CS) were recruited and completed a stop signal task while undergoing functional magnetic resonance imaging (fMRI). A Bayesian ideal observer model was used to predict probabilistic expectations of inhibitory demand, P(stop), on a trial-to-trial basis, based on experienced trial history. Compared with CS, OSU showed attenuated neural activation related to P(stop) magnitude in several areas, including left prefrontal cortex and left caudate. OSU also showed reduced neural activation in the dorsal anterior cingulate cortex (dACC) and right insula in response to an unsigned Bayesian prediction error representing the discrepancy between stimulus outcome and the predicted probability of a stop trial. These results indicate that, despite minimal overt behavioral manifestations, OSU use fewer brain processing resources to predict and update the need for response inhibition, processes that are critical for adjusting and optimizing behavioral performance, which may provide a biomarker for the development of substance dependence.
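The abstract's "Bayesian ideal observer" predicting P(stop) from trial history can be sketched in a few lines. The actual model in the paper is more sophisticated (it weights recent trials more heavily); the version below is a plain Beta-Bernoulli update, with all names hypothetical, shown only to give intuition for a trial-by-trial probabilistic expectation of inhibitory demand:

```python
# Simplified sketch of a trial-by-trial P(stop) estimate, in the spirit of the
# Bayesian ideal observer described in the abstract. This is NOT the paper's
# model -- just a plain Beta-Bernoulli update for intuition.

def p_stop_trajectory(trials, alpha=1.0, beta=1.0):
    """Return the predicted P(stop) *before* each trial, given past outcomes.

    trials: sequence of 1 (stop trial: a tone occurred) / 0 (go trial).
    alpha, beta: Beta prior pseudo-counts for stop / go outcomes.
    """
    predictions = []
    for outcome in trials:
        predictions.append(alpha / (alpha + beta))  # belief before this trial
        if outcome:
            alpha += 1.0   # observed a stop trial
        else:
            beta += 1.0    # observed a go trial
    return predictions

# A run with occasional stop signals: predicted P(stop) jumps after each one
# and decays as go trials accumulate.
history = [0, 0, 1, 0, 0, 0, 1, 0]
preds = p_stop_trajectory(history)
print([round(p, 2) for p in preds])
```

The study's finding, in these terms, is that occasional stimulant users showed attenuated neural activity tracking both this running estimate and the surprise (prediction error) when a trial's outcome deviated from it.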

Chris Mooney - This Fish Crawled Out of the Water…and Into Creationists' Nightmares

I love it when reality interferes with crazy. This cool article and podcast comes from Mother Jones. The book that serves as the foundation for this piece is Neil Shubin's Your Inner Fish: A Journey Into the 3.5-Billion-Year History of the Human Body.

This Fish Crawled Out of the Water…and Into Creationists' Nightmares

Some 375 million years ago, Tiktaalik emerged onto land. Today, explains paleontologist Neil Shubin, we're all walking around in modified fish bodies.


—By Chris Mooney | Fri Apr. 11, 2014

 
Tiktaalik roseae, a transitional fossil showing the link between fish...and us. Courtesy of Tangled Bank Studios, LLC.

We all know the Darwin fish, the car-bumper send-up of the Christian "ichthys" symbol, or Jesus fish. Unlike the Christian symbol, the Darwin fish has, you know, legs. Har har.

But the Darwin fish isn't merely a clever joke; in effect, it contains a testable scientific prediction. If evolution is true, and if life on Earth originated in water, then there must have once been fish species possessing primitive limbs, which enabled them to spend some part of their lives on land. And these species, in turn, must be the ancestors of four-limbed, land-living vertebrates like us.

Sure enough, in 2004, scientists found one of those transitional species: Tiktaalik roseae, a 375 million-year-old Devonian period specimen discovered in the Canadian Arctic by paleontologist Neil Shubin and his colleagues. Tiktaalik, explains Shubin on the latest episode of the Inquiring Minds podcast [included below], is an "anatomical mix between fish and a land-living animal."

"It has a neck," says Shubin, a professor at the University of Chicago. "No fish has a neck. And you know what? When you look inside the fin, and you take off those fin rays, you find an upper arm bone, a forearm, and a wrist." Tiktaalik, Shubin has observed, was a fish capable of doing a push-up. It had both lungs and gills. In sum, it's quite the transitional form.

Shubin's bestselling book about his discovery, Your Inner Fish: A Journey Into the 3.5-Billion-Year History of the Human Body, uses the example of Tiktaalik and other evolutionary evidence to trace how our own bodies share similar structures not only with close relatives like chimpanzees or orangutans, but indeed, with far more distant relatives like fish. Think of it as an extensive unpacking of a famous line by Charles Darwin from his book, The Descent of Man: "Man still bears in his bodily frame the indelible stamp of his lowly origin."

Neil Shubin with Tiktaalik. Courtesy of Tangled Bank Studios, LLC.

And now, PBS has adapted Your Inner Fish as a three-part series (you can watch the first installment here), using the irrepressible Shubin as a narrator who romps from Pennsylvania roadsides to the melting Arctic in search of fossils that elucidate the natural history of our own anatomy.

“Many of the muscles and nerves and bones I'm using to talk to you with right now, and many of the muscles and nerves and bones you're using to hear me with right now, correspond to gill structures in fish,” explained Shubin on Inquiring Minds. Indeed, despite having diverged from fish several hundred million years ago, we still share more than half of our DNA with them, according to Shubin.

"The genetic toolkit that builds their fins is very similar to the genetic toolkit that builds our limbs," he says. "And much of the evolution, we think, from fins to limbs, didn't involve a whole lot of new genes."

Now, of course, none of this sits well with the Young-Earth creationist crowd, who are continually trying to undermine science education and US science literacy. What do creationists say about Shubin's research, and especially Tiktaalik? Turns out that creationist leader Ken Ham of Answers in Genesis has his answer ready to go: "There are no transitional forms that support evolution," he confidently declares in a minute-long audio track dedicated to debunking the Tiktaalik finding. Why? Because "the Bible says God made fish and land animals during the same week, not millions of years apart." That's just the beginning of the attempted takedowns that creationists have leveled against Shubin's work.

Creationists snipe, raise doubt, and deny almost everything that we know, but the reason that Tiktaalik is such a momentous find appears to be beyond them: Evolutionary theory (complemented by an extensive knowledge of geology) predicted not only that this fish would have existed, but also, that its fossilized remains would probably be found within a specific part of the world, in geological layers of a particular age. Hence, Shubin's many trips with his team to the Canadian Arctic, where those rock layers could be found. "We designed this expedition with the goal of finding this exact fossil," explains Shubin. "And we used the tools of evolution and geology as discovery tools to make a prediction about where to look. And the prediction was confirmed." Thus, Tiktaalik isn't just proof of evolution; it's also proof that the scientific process works.

Shubin and his team working in the landscape where Tiktaalik was discovered. Courtesy of Tangled Bank Studios, LLC.

Nevertheless, following the announcement of Tiktaalik's discovery in 2006, the creationists pounced. "My inbox is filled with some interesting emails," says Shubin. Over time, as the idea for Your Inner Fish began to gel, Shubin decided to seek out creationists, or less-than-evolution-friendly audiences, in person to try to explain the fossil and what it means. "I decided at that point, I'm going to go give talks in Alabama, in South Carolina, in Oklahoma, in Texas, and elsewhere, where I'll bring Tiktaalik with me, or the cast of Tiktaalik," says Shubin. "And I've done this every year."

Having the fossil to show, says Shubin, changes the entire nature of the discussion. "It's about the data, it's about the evidence, it's about the discovery," he says. "It's about, 'How do you date those rocks, how do you compare that creature to another creature?' Well, if we do that, we kind of win, because what it means is it changes the conversation in a way where it's now about evidence," he continues. "You're not going to change everybody's mind, but you're going to affect a few, most definitely. And that's kind of my passion. That's what I think I can bring to the table."


And as if Tiktaalik doesn't push enough science denial buttons, it turns out that the story of its discovery is also, simultaneously, a story about climate change. The fossil was, after all, unearthed in the Arctic, the part of the world that is undergoing the most rapid climate change. Shubin has been working in this landscape, visiting every summer or every other summer, for years now, making him a firsthand witness to the abrupt transition. "I just feel, my Arctic has changed," Shubin says. "Climate has sort of forced this other set of changes as well, and it's hard to be there and not feel it," he adds, noting a much greater military presence and also a corporate presence from companies in shipping and extractive industries.

But there's a great irony: When it comes to the melting Arctic, what's good for petroleum geologists is also pretty good for paleontologists, Shubin admits. "You look at the aerial photos, that were taken in 1959, and you can compare them to the aerial photos today, well, there's more rock to look at," he says. Glacial retreats may be a global disaster, but there's no denying they're a scientific opportunity.

All of which means: Those who deny climate change, and through their denial, help to worsen it...well, at least they're giving us more evidence for evolution.

To listen to the full podcast with Neil Shubin, you can stream below:

This episode of Inquiring Minds, a podcast hosted by neuroscientist and musician Indre Viskontas and best-selling author Chris Mooney, also features a discussion of the growing possibility of an El Niño developing later this year, and the bizarre viral myth about animals fleeing Yellowstone Park because of an impending supervolcano eruption.

To catch future shows right when they are released, subscribe to Inquiring Minds via iTunes or RSS. We are also available on Stitcher and on Swell. You can follow the show on Twitter at @inquiringshow and like us on Facebook. Inquiring Minds was also recently singled out as one of the "Best of 2013" on iTunes—you can learn more here.

Laurence Heller, PhD - The NeuroAffective Relational Model™

 

As I finish reading Healing Developmental Trauma: How Early Trauma Affects Self-Regulation, Self-Image, and the Capacity for Relationship, by Laurence Heller, PhD, I find myself looking for a way to do more in-depth training in the NeuroAffective Relational Model™ for working with developmental trauma, which is something I see in about 75% (or more) of my clients.

Here are some resources online - a nearly 2-hour video introduction to the model, a written introduction from his personal page, and then an interview he did with Dr. David Van Nuys on Shrink Rap Radio last December.

Enjoy!

Here is a video talk by Dr. Laurence Heller giving an introduction to the NeuroAffective Relational Model™ of healing developmental trauma:


Here is the introduction to NARM from Dr. Heller's website:

The NeuroAffective Relational Model™
[NARM]
Mindful Self-Regulation in Clinical Practice


In recent years the role of self-regulation has become an important part of psychological thinking.
The NeuroAffective Relational Model™ (NARM) brings the current understanding of self-regulation into clinical practice. This resource-oriented, non-regressive model emphasizes helping clients establish connection to the parts of self that are organized, coherent and functional. It helps bring into awareness and organization the parts of self that are disorganized and dysfunctional without making the regressed, dysfunctional elements the primary theme of the therapy.

Core Principles
The NeuroAffective Relational Model™ focuses on the fundamental tasks and functional unity of biological and psychological development. The NARM model:

• Integrates both a nervous system based and a relational orientation.
• Brings developmentally-informed clinical interventions that use somatic mindfulness and an orientation to resources to anchor self-regulation in the nervous system.
• Works clinically with the link between psychological issues and the body by helping access the body’s self-regulatory capacities and by supporting nervous system re-regulation.
• Uses mindful inquiry into the deeper identifications and counter-identifications that we take to be our identity.
In the NARM approach, we work simultaneously with the physiology and the psychology of individuals who have experienced developmental trauma, and focus on the interplay between issues of identity and the capacity for connection and regulation.

NARM uses four primary organizing principles:

• Supporting connection and organization
• Exploring identity
• Working in present time
• Regulating the nervous system
Five Organizing Developmental Themes
There are five developmental life themes and associated core resources that are essential to our capacity for self-regulation and affect our ability to be present to self and others in the here-and-now:

• Connection. We feel that we belong in the world. We are in touch with our body and our emotions and capable of consistent connection with others.
• Attunement. Our ability to know what we need and to recognize, reach out for, and take in the abundance that life offers.
• Trust. We have an inherent trust in ourselves and others. We feel safe enough to allow a healthy interdependence with others.
• Autonomy. We are able to say no and set limits with others. We speak our mind without guilt or fear.
• Love-Sexuality. Our heart is open and we are able to integrate a loving relationship with a vital sexuality.
To the degree that these five basic needs are met, we experience regulation and connection. We feel safe and trusting of our environment, fluid and connected to ourselves and others. We experience a sense of regulation and expansion. To the degree that these basic needs are not met, we develop survival styles to try to manage the disconnection and dysregulation. 


A Fundamental Shift
Whereas much of psychodynamic psychotherapy has been oriented toward identifying pathology and focusing on problems, NARM is a model for therapy and growth that emphasizes working with strengths as well as with symptoms. It orients towards resources, both internal and external, in order to support the development of an increased capacity for self-regulation.

At the heart of what may seem like a wide range of physical and emotional symptoms, most psychological and many physiological problems can be traced to a disturbance in one or more of the five organizing developmental themes related to the survival styles.

Initially, survival styles are adaptive, representing success, not pathology. However, because the brain uses the past to predict the future, these survival patterns remain fixed in our nervous system and create an adaptive but false identity. It is the persistence of survival styles appropriate to the past that distorts present experience and creates symptoms. These survival patterns, having outlived their usefulness, create ongoing disconnection from our authentic self and from others.

In NARM the focus is less on why a person is the way they are and more on how their survival style distorts what they are experiencing in the present moment. Understanding how patterns began can be helpful to the client but is primarily useful to the degree that these patterns have become survival styles that influence present experience.

The Metaprocess
Each therapeutic tradition has an implicit metaprocess. The metaprocess teaches clients to pay attention to certain elements of their experience and to ignore others. When therapies focus on deficiency, pain, and dysfunction, clients become skilled at orienting toward deficiency, pain, and dysfunction. Focusing on the difficulties of the past does not sufficiently reduce dysfunction nor support self-regulation.

The metaprocess for the NARM model is the mindful awareness of self in the present moment. The client is invited into a fundamental process of inquiry:

“What are the patterns that are preventing me from being present to myself and others at this moment and in my life?”
We explore this question on the following levels of experience: cognitive, emotional, felt sense, and physiological. NARM explores personal history to the degree that patterns from the past interfere with being present and in contact with self and others in the here-and-now. It brings an active process of inquiry to clients’ relational and survival styles, building on their strengths and helping them to experience a sense of agency in the difficulties of their current life.

The NARM metaprocess involves two aspects of mindfulness:

• Somatic mindfulness
• Mindful awareness of the organizing principles of one’s adaptive survival styles
Using a dual awareness that is anchored in the present moment, a person becomes mindful of cognitive, emotional, and physiological patterns that began in the past while not falling into the trap of making the past more important than the present. Working with the NARM approach progressively reinforces the connection to self in the present moment. Tracking the process of connection/disconnection, regulation/dysregulation in present time helps clients connect with their sense of agency and feel less like victims of their childhood.

Using resource-oriented techniques that work with subtle shifts in the nervous system adds significant effectiveness. Working with the nervous system is fundamental in disrupting the predictive tendencies of the brain. It is connection to our body and to other people that brings healing re-regulation. Using techniques that support increased connection with self and others is instrumental in supporting effective re-regulation.

Bottom-Up and Top-Down
There are continual loops of information going in both directions, from the body to the brain and from the brain to the body. There are similar loops within the lower and higher structures of the brain, that is, between the brain stem, limbic system, and cortex.
NARM uses both top-down and bottom-up approaches. Top-down approaches emphasize cognitions and emotions as the primary focus. Bottom-up approaches, on the other hand, focus on the body, the felt sense and the instinctive responses as they are mediated through the brain stem toward higher levels of brain organization. Using both bottom-up and top-down orientations greatly expands therapeutic options.

Working with the Life Force
The spontaneous movement in all of us is toward connection and health. No matter how withdrawn and isolating we have become, or how serious the trauma we have experienced, on the deepest level, just as a plant spontaneously moves towards the sun, there is in each of us an impulse moving toward connection. This organismic impulse is the fuel of The NeuroAffective Relational Model™.

~ Copyright 2009-2012, Laurence Heller
Dr. Heller was a guest on Dr. David Van Nuys' Shrink Rap Radio back in December of 2013 - I am pretty sure I posted that talk here already, but here is the link again, as well as a link to the transcript if you'd rather read the interview.

Monday, April 14, 2014

Hereditary Trauma: Inheritance of Traumas and How They May Be Mediated


We, as therapists, see the results of intergenerational trauma (also known as transgenerational trauma) in our offices all of the time, especially with incest and/or neglect. The work of the interpersonal neurobiologists demonstrates that many of the cognitive beliefs and affect dysregulation in mental illness result from faulty or absent attachment bonding.

[Aside: It's interesting to me that the Native American population uses the term "intergenerational trauma" to discuss what has happened to them, but the Jewish community uses the term "transgenerational trauma" to discuss what their people have endured.] 

This new work offers insight into the epigenetic aspect of intergenerational trauma, although the authors' bias seems to be toward a purely biological etiology, which is only a partial understanding of the issue. It's still useful information.

Journal Reference:
Gapp K, Jawaid A, Sarkies P, Bohacek J, Pelczar P, Prados J, Farinelli L, Miska E, Mansuy IM. Implication of sperm RNAs in transgenerational inheritance of the effects of early trauma in mice. Nature Neuroscience, April 13, 2014 DOI: 10.1038/nn.3695

Hereditary trauma: Inheritance of traumas and how they may be mediated

April 13, 2014
Source: ETH Zurich

Summary:
Extreme and traumatic events can change a person -- and often, years later, even affect their children. Researchers have now unmasked a piece in the puzzle of how the inheritance of traumas may be mediated. The phenomenon has long been known in psychology: traumatic experiences can induce behavioural disorders that are passed down from one generation to the next. It is only recently that scientists have begun to understand the physiological processes underlying hereditary trauma.

The consequences of traumatic experiences can be passed on from one generation to the next. Credit: Image by Isabelle Mansuy / UZH / Copyright ETH Zurich

Extreme and traumatic events can change a person -- and often, years later, even affect their children. Researchers of the University of Zurich and ETH Zurich have now unmasked a piece in the puzzle of how the inheritance of traumas may be mediated.

The phenomenon has long been known in psychology: traumatic experiences can induce behavioural disorders that are passed down from one generation to the next. It is only recently that scientists have begun to understand the physiological processes underlying hereditary trauma. "There are diseases such as bipolar disorder, that run in families but can't be traced back to a particular gene," explains Isabelle Mansuy, professor at ETH Zurich and the University of Zurich. With her research group at the Brain Research Institute of the University of Zurich, she has been studying the molecular processes involved in non-genetic inheritance of behavioural symptoms induced by traumatic experiences in early life.

Mansuy and her team have succeeded in identifying a key component of these processes: short RNA molecules. These RNAs are synthesized from genetic information (DNA) by enzymes that read specific sections of the DNA (genes) and use them as a template to produce corresponding RNAs. Other enzymes then trim these RNAs into mature forms. Cells naturally contain a large number of different short RNA molecules called microRNAs. They have regulatory functions, such as controlling how many copies of a particular protein are made.

Small RNAs with a huge impact

The researchers studied the number and kind of microRNAs expressed by adult mice exposed to traumatic conditions in early life and compared them with non-traumatized mice. They discovered that traumatic stress alters the amount of several microRNAs in the blood, brain and sperm -- while some microRNAs were produced in excess, others were lower than in the corresponding tissues or cells of control animals. These alterations resulted in misregulation of cellular processes normally controlled by these microRNAs.

After traumatic experiences, the mice behaved markedly differently: they partly lost their natural aversion to open spaces and bright light and had depressive-like behaviours. These behavioural symptoms were also transferred to the next generation via sperm, even though the offspring were not exposed to any traumatic stress themselves.

Even passed on to the third generation

The metabolism of the offspring of stressed mice was also impaired: their insulin and blood-sugar levels were lower than in the offspring of non-traumatized parents. "We were able to demonstrate for the first time that traumatic experiences affect metabolism in the long-term and that these changes are hereditary," says Mansuy. The effects on metabolism and behaviour even persisted in the third generation.

"With the imbalance in microRNAs in sperm, we have discovered a key factor through which trauma can be passed on," explains Mansuy. However, certain questions remain open, such as how the dysregulation in short RNAs comes about. "Most likely, it is part of a chain of events that begins with the body producing too much stress hormones."

Importantly, acquired traits other than those induced by trauma could also be inherited through similar mechanisms, the researcher suspects. "The environment leaves traces on the brain, on organs and also on gametes. Through gametes, these traces can be passed to the next generation."

Mansuy and her team are currently studying the role of short RNAs in trauma inheritance in humans. As they were also able to demonstrate the microRNA imbalance in the blood of traumatized mice and their offspring, the scientists hope that their results may be useful for developing a blood test for diagnostics.


Story Source:
The above story is based on materials provided by ETH Zurich. Note: Materials may be edited for content and length.

Sacred Psychiatry in Ancient Greece - Georgios Tzeferakos & Douzenis Athanasios


This open access article from the Annals of General Psychiatry offers an interesting glimpse into the role of the psychological healer in ancient Greece and, to my surprise, the role that shamanism played in their healing model. Cool stuff.

Full Citation:
Tzeferakos, G., and Athanasios, D. (2014, Apr 12). 'Sacred psychiatry in ancient Greece'. Annals of General Psychiatry, 13:11. doi:10.1186/1744-859X-13-11

'Sacred psychiatry in ancient Greece'

Georgios Tzeferakos and Douzenis Athanasios
Author Affiliations
Published: 12 April 2014


Abstract (provisional)


From the ancient times, there are three basic approaches for the interpretation of the different psychic phenomena: the organic, the psychological, and the sacred approach. The sacred approach forms the primordial foundation for any psychopathological development, innate to the prelogical human mind. Until the second millennium B.C., the Great Mother ruled the Universe and shamans cured the different mental disorders. But, around 1500 B.C., the predominance of the Hellenic civilization over the Pelasgic brought great changes in the theological and psychopathological fields. The Hellenes eliminated the cult of the Great Mother and worshiped Dias, a male deity, the father of gods and humans. With the Father's help and divinatory powers, the warrior-hero made diagnoses and found the right therapies for mental illness; in this way, sacerdotal psychiatry was born.

The complete article is available as a provisional PDF. The fully formatted PDF and HTML versions are in production. 

Introduction


Three basic trends in psychiatric thought can be traced back to earliest times: (a) organic approach, the attempt to explain diseases of the mind in physical terms; (b) psychological approach, the attempt to find a psychological explanation for mental disturbances; and (c) sacred or magical approach, which can be further divided into the animistic, mythological and demonological models [1]. The origin of the word ‘magic’ leads us back to the Persian religion. The prophet Zoroaster (sixth century B.C.) helped man in his struggle against evil. Aiding Zoroaster in his proselytization of the right road were the priests known as Mah (pronounced Mag), which meant ‘the greatest ones’. In subsequent years, the great Magi lost their high reputation and became known as charlatans and tricksters, hence, the connotation to the word ‘magic’ [2].
 

The magical sacred approach forms the primordial foundation for any psychopathological development because it reflects a modality of interpretation of reality that is innate to the prelogical human mind. The animistic model is based on prelogical, emotional reasoning, originating from certain historical conditions. Primitive man lived in deep communion with nature and perceived all phenomena to be connected by mysterious forces. Chance does not exist, and everything that happens has a precise meaning because the world is inhabited by animated entities that support every single event. Different feelings and emotions, psychosensorial disturbances, and delusions are the work of obscure and ineffable forces that people the world of nature and can act on a man's mind and soul [3].
 

Greek thought in the middle of the second millennium B.C. transformed the animistic conception into a naturalistic, anthropomorphic theology, in which indistinct and fluid forces were materialized in myths. Every symptom was thought to be caused by a certain deity, which could, if implored, benevolently cure it. The human passions, the emotional suffering from endopsychic conflicts, and the different psychiatric symptoms were projected and concentrated in a divine symbol. The myth was a form of knowledge that took place by symbolizing in concrete divine shapes the phenomena of nature and the complex life of the soul and mind. The ‘anthropomorphism’ of the Greek mythology, where even gods have feelings and emotions, is a historical breakthrough [4].
 

The genesis and the evolvement of the more elaborate mythological comprehension of the man and the universe are in a direct relationship with the historical dynamic process of an increasing complexity of the social structure. During the pre-Homeric era, in the Minoan and the Mycenaean societies, citizens were divided according to their financial status, to their position in the administrative hierarchy, and to their relationship with the kings. The king in these primitive societies was the sole judicial, legislative, and administrative authority. Another important criterion for the social position and integration in the different groups and ‘fratrias’, in the pre-Homeric and Homeric societies, was the genealogical lineage. Noble families claiming to have descended from mythological heroes gained social power and prestige.
 

The extensive colonization of the Mediterranean coast by the Greeks led to the emergence of new social groups. The merchants gained wealth and power and also became the bearers of new cultural, scientific, and political ideas taken from the neighboring nations. Gradually and through turbulent strife, the mainstay of the social structure in the ancient Greek world, the city-state (‘polis’) became the cradle of democracy. In the classical era, although the old noble families still held much of their power, the new wealthy aristocracy and the middle social classes gained access to legislative, administrative, and judicial institutions. This social ‘democratization’ allowed and supported the great scientific and cultural changes that took place in this historical period.
 

Shamanism


Primitive man cured his minor troubles through various intuitive, empirical techniques. The first attempts to explain illness were equally intuitive: sometimes simple phenomena having a cause-and-effect relationship were easily understood (overeating and drinking may cause discomfort and thus purgatives will cure it); but that was not always the case. When the causes of an ailment were not obvious, primitive man ascribed them to the malignant influences of either other humans or divine beings and dealt with the former by magic or sorcery and the latter through magico-religious practices.
 

In these primitive societies, the typical witch doctor or shaman was a person capable of transcending into an ecstatic state, with the help of aromatic herbs, alcohol, seeds, and music. While being in ecstasy, he was able to communicate with the pathogenic spirits, drive them away, and thus cure the patient. In hunting societies, shamans acquire their healing powers from animal spirits [5]. A shaman was able to communicate with the beasts, travel through time and space, sink deep into the world of the spirits, and change his psychic and physical form.
 

When the Greeks colonized the Black Sea, during the seventh century B.C., they came in contact with shamanic rituals and beliefs. A key figure of shamanism was Pythagoras (sixth century B.C.). He was, in today's terms, a mathematician, astronomer, psychologist, psychiatrist, physician, musicologist, mystic, and philosopher. He gathered excessive wisdom by living through 10 or 20 human generations [6], and he believed in reincarnation (‘metempsychosis’). Through an indefinite cycle of psychic reincarnations, one could achieve immortality, a privilege seized only by the gods. Pythagoras could be considered the ‘father’ of Psychology since, as Porphyrios says, ‘He was the first one to define with precision the anthropocentric science, which teaches us the nature of an individual’ [7]. He was the founder of the encephalocentric doctrine which considered the brain as the seat of human consciousness, sensation, and knowledge and claimed that the psychic organ has a tripartite division, closely resembling the structural theory of Freud: (1) reason, which was the innate category of truth, (2) intelligence, which carried out the synthesis of sensory sensations, and (3) impulse, which derived from the soma. The rational part had its seat in the brain and the irrational one in the heart. Pythagoras considered the mental life to be a harmony supported by the relationship between antithetical forms: love-hate, good-bad, etc. Life itself was regulated according to this theory by opposite rhythmic movements, e.g., sleep-wakefulness, and mental symptoms originated from a disequilibrium of this basic harmony. The work of Pythagoras and Empedocles, originator of the cosmogenic theory of the four classical elements (fire, earth, air, and water), formed the basis for the humoral theory of Hippocrates [8]. 
Pythagoras stressed the value of group psychotherapy, medical herbs (opium for anxiety, cauliflower and scilla against depression, anis against epilepsy), and music for the treatment of emotionally ill patients [9]. On the other hand, the Pythagoreans avoided cauterizations and incisions [10]. According to Edelstein [11], the ‘Hippocratic oath’ is of Pythagorean origin because some of its main principles are the rejection of assisted suicide and abortion, the prohibition of surgical procedures, and public disclosure of medical cases. Hippocrates and his followers performed surgeries, administered drugs for abortion, and publicly discussed case reports, with direct reference to the patients' names [12].
 

The psychic immortality was a common belief between the Pythagoreans and the Orphic religious cult, which, according to Herodotus, originated from the Egyptian religion [13]. Fundamental feature of Orphism was the psychic ‘catharsis’ or cleansing, from somatic impulses and passions, through shamanic-like rituals, music, strict dietary practices, and exorcisms [14]. Psychically depressed patients were stimulated with Phrygian music, while the excited ones were sedated with the Doric tonalities. Orphic mysteries mainly practiced in Thrace descended to the Hellenic world from Northern Europe and Siberia [15].
 

Between legend and history: Melampus and Asclepius


Ancient Greek medicine was a complex practice perceived as something between myth and reality, as an expression of a magical divinatory, hieratic, and empirical technical practice. Examples of such interrelationship are the myths of Melampus, a priest-psychiatrist who allegedly lived in Argos 200 years before the Trojan War [16], and Asclepius who was considered to be the ‘god of medicine’ by the ancient Greeks [17].

Until the second millennium B.C., a female deity governed the Universe, according to the archaic Pelasgic religion. She was the Great Mother Earth, named Cybele or Selene or Hecate, who dominated man and predated other deities. She gave birth to all things, fertilized not by any male opposite but by symbolic seeds in the form of the wind, beans, or insects [18]. The Great Mother regulated the sexual and affective life and if angered could unleash malevolent influences that brought about zoopathic psychoses. The place of origin of her following is uncertain, but it is thought that she had popular followings in Thrace. Her most important sanctuary was Lagina, a theocratic city-state in which the goddess was served by eunuchs, called Dactyls [19].
 

The predominance of the Hellenic civilization, in the second millennium B.C., over the Pelasgic, modified the psychopathological interpretation. This fundamental change almost led to a civil war, with lots of killings, especially in the Delphi temple. The Hellenes eliminated the cult of Hecate and worshiped Dias, a male deity, the father of gods and humans. The warrior cult of the hero replaced that of the Mother earth. It was the hero who set himself up as a physician and priest against the forces of evil. With the Father's help and divinatory powers, he made diagnoses and found the right therapies for mental illness; in this way, sacerdotal psychiatry was born around 1500 B.C. [1].

Read the whole article by downloading the PDF from the link above.

A.I. Has Grown Up and Left Home


As my regular readers well know, I don't think we will ever have human-like robots who can interact with us as though they are not machines. This article from Nautilus presents recent advances in what are known as subsymbolic approaches to AI: "Trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all."

A.I. Has Grown Up and Left Home

It matters only that we think, not how we think.

By David Auerbach | Illustration by Olimpia Zagnoli | December 19, 2013

"The history of Artificial Intelligence,” said my computer science professor on the first day of class, “is a history of failure.” This harsh judgment summed up 50 years of trying to get computers to think. Sure, they could crunch numbers a billion times faster in 2000 than they could in 1950, but computer science pioneer and genius Alan Turing had predicted in 1950 that machines would be thinking by 2000: Capable of human levels of creativity, problem solving, personality, and adaptive behavior. Maybe they wouldn’t be conscious (that question is for the philosophers), but they would have personalities and motivations, like Robbie the Robot or HAL 9000. Not only did we miss the deadline, but we don’t even seem to be close. And this is a double failure, because it also means that we don’t understand what thinking really is.

Our approach to thinking, from the early days of the computer era, focused on the question of how to represent the knowledge about which thoughts are thought, and the rules that operate on that knowledge. So when advances in technology made artificial intelligence a viable field in the 1940s and 1950s, researchers turned to formal symbolic processes. After all, it seemed easy to represent “There’s a cat on the mat” in terms of symbols and logic:
∃x ∃y (Cat(x) ∧ Mat(y) ∧ SittingOn(x, y))
Literally translated, this reads as “there exists variable x and variable y such that x is a cat, y is a mat, and x is sitting on y.” Which is no doubt part of the puzzle. But does this get us close to understanding what it is to think that there is a cat sitting on the mat? The answer has turned out to be “no,” in part because of those constants in the equation. “Cat,” “mat,” and “sitting” aren’t as simple as they seem. Stripping them of their relationship to real-world objects, and all of the complexity that entails, dooms the project of making anything resembling a human thought.
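
To make the flavor of this style of representation concrete, here is a toy Python sketch that checks that formula against a small fact base. The fact set and function names are my own illustration, not any real AI system's representation:

```python
# A tiny symbolic "knowledge base": each fact is a tuple of
# (predicate, argument, ...).
facts = {
    ("cat", "felix"),
    ("mat", "mat1"),
    ("sitting_on", "felix", "mat1"),
}

def exists_cat_on_mat(facts):
    """Check the formula: exists x, y such that
    cat(x) and mat(y) and sitting_on(x, y)."""
    cats = {f[1] for f in facts if f[0] == "cat"}
    mats = {f[1] for f in facts if f[0] == "mat"}
    return any(("sitting_on", x, y) in facts for x in cats for y in mats)

print(exists_cat_on_mat(facts))  # True
```

The brittleness the article describes is visible even here: the symbols "cat" and "mat" mean nothing beyond string equality, so the check succeeds or fails with no grasp of what a cat or a mat is.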

This lack of context was also the Achilles heel of the final attempted moonshot of symbolic artificial intelligence. The Cyc Project was a decades-long effort, begun in 1984, that attempted to create a general-purpose “expert system” that understood everything about the world. A team of researchers under the direction of Douglas Lenat set about manually coding a comprehensive store of general knowledge. What it boiled down to was the formal representation of millions of rules, such as “Cats have four legs” and “Richard Nixon was the 37th President of the United States.” Using formal logic, the Cyc (from “encyclopedia”) knowledge base could then draw inferences. For example, it could conclude that the author of Ulysses was less than 8 feet tall:

(implies
  (writtenBy Ulysses-Book ?SPEAKER)
  (equals ?SPEAKER JamesJoyce))
(isa JamesJoyce IrishCitizen)
(isa JamesJoyce Human)
(implies
  (isa ?SOMEONE Human)
  (maximumHeightInFeet ?SOMEONE 8))
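The inference above amounts to simple forward chaining over assertions. A much-simplified Python analogue (the fact tuples and the single rule are illustrative, not actual Cyc machinery) might look like:

```python
# Forward chaining over Cyc-style assertions (names are illustrative).
facts = {
    ("writtenBy", "Ulysses-Book", "JamesJoyce"),
    ("isa", "JamesJoyce", "IrishCitizen"),
    ("isa", "JamesJoyce", "Human"),
}

def apply_height_rule(kb):
    """Rule: anything that isa Human has a maximum height of 8 feet."""
    derived = {("maximumHeightInFeet", x, 8)
               for (pred, x, cls) in kb
               if pred == "isa" and cls == "Human"}
    return kb | derived

facts = apply_height_rule(facts)
print(("maximumHeightInFeet", "JamesJoyce", 8) in facts)  # True
```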

Unfortunately, not all facts are so clear-cut. Take the statement “Cats have four legs.” Some cats have three legs, and perhaps there is some mutant cat with five legs out there. (And Cat Stevens only has two legs.) So Cyc needed a more complicated rule, like “Most cats have four legs, but some cats can have fewer due to injuries, and it’s not out of the realm of possibility that a cat could have more than four legs.” Specifying both rules and their exceptions led to a snowballing programming burden.
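The snowballing is easy to see in code: every exception to “cats have four legs” becomes another hand-written branch, and each branch can spawn exceptions of its own. A toy sketch (the case names are invented):

```python
# Each exception to "cats have four legs" becomes another hand-coded branch.
def leg_count(cat):
    if cat.get("injury") == "lost_leg":
        return 3          # the injured cat
    if cat.get("mutation") == "extra_leg":
        return 5          # the hypothetical mutant
    return 4              # the default rule

print(leg_count({}))                      # 4
print(leg_count({"injury": "lost_leg"}))  # 3
```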

After more than 25 years, Cyc now contains 5 million assertions. Lenat has said that 100 million would be required before Cyc would be able to reason like a human does. No significant applications of its knowledge base currently exist, but in a sign of the times, the project in recent years has begun developing a “Terrorist Knowledge Base.” Lenat announced in 2003 that Cyc had “predicted” the anthrax mail attacks six months before they occurred. This feat is less impressive when you consider the other predictions Cyc had made, including the possibility that Al Qaeda might bomb the Hoover Dam using trained dolphins.

Cyc, and the formal symbolic logic on which it rests, implicitly make a crucial and troublesome assumption about thinking. By gathering together in a single virtual “space” all of the information and relationships relevant to a particular thought, the symbolic approach pursues what Daniel Dennett has called a “Cartesian theater”—a kind of home for consciousness and thinking. It is in this theater that the various strands necessary for a thought are gathered, combined, and transformed in the right kinds of ways, whatever those may be. In Dennett’s words, the theater is necessary to the “view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of ‘presentation’ in experience because what happens there is what you are conscious of.” The theater, he goes on to say, is a remnant of a mind-body dualism which most modern philosophers have sworn off, but which subtly persists in our thinking about consciousness.

The impetus to believe in something like the Cartesian theater is clear. We humans, more or less, behave like unified rational agents, with a linear style of thinking. And so, since we think of ourselves as unified, we tend to reduce ourselves not to a single body but to a single thinker, some “ghost in the machine” that animates and controls our biological body. It doesn’t have to be in the head—the Greeks put the spirit (thymos) in the chest and the breath—but it remains a single, indivisible entity, our soul living in the house of the senses and memory. Therefore, if we can be boiled down to an indivisible entity, surely that entity must be contained or located somewhere.
Philosophy of mind: René Descartes’ illustration of dualism. Wikimedia Commons
This has prompted much research looking for “the area” where thought happens. Descartes hypothesized that our immortal soul interacted with our animal brain through the pineal gland. Today, studies of brain-damaged patients (as Oliver Sacks has chronicled in his books) have shown how functioning is corrupted by damage to different parts of the brain. We know, for example, that language processing occurs in Broca’s area in the frontal lobe of the left hemisphere. But some patients with their Broca’s area destroyed can still understand language, due to the immense neuroplasticity of the brain. And language, in turn, is just a part of what we call “thinking.” If we can’t even pin down where the brain processes language, we are a long way from locating that mysterious entity, “consciousness.” That may be because it doesn’t exist in a spot you can point at.

Symbolic artificial intelligence, the Cartesian theater, and the shadows of mind-body dualism plagued the early decades of research into consciousness and thinking. But eventually researchers began to throw the yoke off. Around 1960, linguistics pioneer Noam Chomsky made a bold argument: Forget about meaning, forget about thinking, just focus on syntax. He claimed that linguistic syntax could be represented formally, was a computational problem, and was universal to all humans and hard-coded into every baby’s head. The process of exposure to language caused certain switches to be flipped on or off to determine what particular form the grammar would take (English, Chinese, Inuit, and so on). But the process was one of selection, not acquisition. The rules of grammar, however they were implemented, became the target of research programs around the world, supplanting a search for “the home of thought.”

Chomsky made progress by abandoning the attempt to directly explain meaning and thought. But he remained firmly inside the Cartesian camp. His theories were symbolic in nature, postulating relationships among a variety of potential vocabularies rooted in native rational faculties, and never making any predictions that proved true without exception. Modern artificial intelligence programs have gone one step further, by giving up on the idea of any form of knowledge representation. These so-called subsymbolic approaches, which also go under such names as connectionism, neural networks, and parallel distributed processing, take a unique approach. Rather than going from the inside out—injecting symbolic “thoughts” into computer code and praying that the program will exhibit sufficiently human-like thinking—subsymbolic approaches proceed from the outside in: Trying to get computers to behave intelligently without worrying about whether the code actually “represents” thinking at all.

Subsymbolic approaches were pioneered in the late 1950s and 1960s, but lay fallow for years because they initially seemed to generate worse results than symbolic approaches. In 1957, Frank Rosenblatt pioneered what he called the “perceptron,” which used a re-entrant feedback algorithm in order to “train” itself to compute various logical functions correctly, and thereby “learn” in the loosest sense of the term. This approach was also called “connectionism” and gave rise to the term “neural networks,” though a perceptron is vastly simpler than an actual neuron. Rosenblatt was drawing on oddball cybernetic pioneers like Norbert Wiener, Warren McCulloch, Ross Ashby, and Grey Walter, who theorized and even experimented with homeostatic machines that sought equilibrium with their environment, such as Grey Walter’s light-seeking robotic “turtles” and Claude Shannon’s maze-running “rats.”
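Rosenblatt’s update rule itself is only a few lines: nudge the weights toward each misclassified example until the unit computes the target function. A minimal sketch, here learning logical AND (the learning rate and epoch count are arbitrary choices):

```python
# A single perceptron learning the AND function via Rosenblatt's update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Feedback: nudge weights and bias toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Nothing in the code “knows” what AND means; the correct behavior simply emerges from repeated correction, which is the sense in which the perceptron “learns.”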

In 1969, Rosenblatt was met with a scathing attack by symbolic artificial intelligence advocate Marvin Minsky. The attack was so successful that subsymbolic approaches were more or less abandoned during the 1970s, a time which has been called the AI Winter. As symbolic approaches continued to flail in the 1970s and 1980s, people like Terrence Sejnowski and David Rumelhart returned to subsymbolic artificial intelligence, modeling it after learning in biological systems. They studied how simple organisms relate to their environment, and how the evolution of these organisms gradually built up increasingly complex behavior. Biology, genetics, and neuropsychology figured here, rather than logic and ontology.

This approach more or less abandons knowledge as a starting point. In contrast to Chomsky, a subsymbolic approach to grammar would say that grammar is determined and conditioned by environmental and organismic constraints (what psychologist Joshua Hartshorne calls “design constraints”), not by a set of hardcoded computational rules in the brain. These constraints aren’t expressed in strictly formal terms. Rather, they are looser contextual demands such as, “There must be a way for an organism to refer to itself” and “There must be a way to express a change in the environment.”

By abandoning the search for a Cartesian theater, containing a library of symbols and rules, researchers made the leap from instilling machines with data to instilling them with knowledge. The essential truth behind subsymbolism is that language and behavior exist in relation to an environment, not in a vacuum, and they gain meaning from their usage in that environment. To use language is to use it for some purpose. To behave is to behave for some end. In this view, any attempt to generate a universal set of rules will always be riddled with exceptions, because contexts are constantly shifting. Without the drive toward concrete environmental goals, representation of knowledge in a computer is meaningless, and fruitless. It remains locked in the realm of data.


For certain classes of problems, modern subsymbolic approaches have proved far more generalizable and ubiquitous than any previous symbolic approach to the same problems. This success speaks to the advantage of not worrying about whether a computer “knows” or “understands” the problem it is working on. For example, genetic approaches represent algorithms with varying parameters as chromosomal “strings,” and “breed” successful algorithms with one another. These approaches do not improve through better understanding of the problem. All that matters is the fitness of the algorithm with respect to its environment—in other words, how the algorithm behaves. This black-box approach has yielded successful applications in everything from bioinformatics to economics, yet one can never give a concise explanation of just why the fittest algorithm is the most fit.
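A toy version of such a genetic loop, with an arbitrary all-ones bit string as the fitness target (the population size, mutation scheme, and target are all invented for illustration):

```python
import random

# Toy genetic algorithm: evolve bit strings toward an all-ones target.
# Fitness is just the number of ones -- the algorithm never "understands" why.
random.seed(0)
TARGET_LEN = 12

def fitness(chromosome):
    return sum(chromosome)

def breed(a, b):
    cut = random.randrange(1, TARGET_LEN)      # single-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(TARGET_LEN)           # one-bit mutation
    child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # keep the fittest
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(20)]
best = max(population, key=fitness)
print(fitness(best))
```

The loop reliably drives fitness up, yet inspecting the surviving chromosomes tells you nothing about why they won, which is exactly the black-box quality described above.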

Neural networks are another successful subsymbolic technology, and are used for image, facial, and voice recognition applications. No representation of concepts is hardcoded into them, and the factors that they use to identify a particular subclass of images emerge from the operation of the algorithm itself. They can also be surprising: Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images.

These networks are usually “primed” with test data, so that they can refine their recognition skills on carefully selected samples. Humans are often involved in assembling this test data, in which case the process is called “supervised learning.” But even the requirement for training is being left behind. Influenced by theories arguing that parts of the brain are specifically devoted to identifying particular types of visual imagery, such as faces or hands, a 2012 paper by Stanford and Google computer scientists showed some progress in getting a neural network to identify faces without priming data, among images that both did and did not contain faces. Nowhere in the programming was any explicit designation made of what constituted a “face.” The network evolved this category on its own. It did the same for “cat faces” and “human bodies” with similar success rates (about 80 percent).

While the successes behind subsymbolic artificial intelligence are impressive, there is a catch that is very nearly Faustian: The terms of success may prohibit any insight into how thinking “works,” but instead will confirm that there is no secret to be had—at least not in the way that we’ve historically conceived of it. It is increasingly clear that the Cartesian model is nothing more than a convenient abstraction, a shorthand for irreducibly complex operations that somehow (we don’t know how) give the appearance, both to ourselves and to others, of thinking. New models for artificial intelligence ask us to, in the words of philosopher Thomas Metzinger, rid ourselves of an “Ego Tunnel,” and understand that, while our sense of self dominates our thoughts, it does not dominate our brains.

Instead of locating where in our brains we have the concept of “face,” we have made a computer whose code also seems to lack the concept of “face.” Surprisingly, this approach succeeds where others have failed, giving the computer an inkling of the very idea whose explicit definition we gave up on trying to communicate. In moving out of our preconceived notion of the home of thought, we have gained in proportion not just a new level of artificial intelligence, but perhaps also a kind of self-knowledge.

David Auerbach is a writer and software engineer who lives in New York. He writes the Bitwise column for Slate.