Thursday, August 28, 2014

Metaintegral Academy - Vital Skills for Thriving in a Wild, Complex World: A Free Online Mini-Course for Change Leaders

Passing this along for those who may be interested: the topic is interesting and there are some good people presenting.
 
We are excited to announce a new—free—four-session online course:

Vital Skills for Thriving in a Wild, Complex World
A Free Online Mini-Course for Change Leaders
October 13-16, 2014

It’s a truism that the world and its problems are growing more complex every day. It’s also true that effective solutions aren’t keeping pace and that it’s harder than ever for change leaders to have impact. Ominously, the gap is widening. What does this portend for the future?

Since its founding in 2011, MetaIntegral Academy has devoted itself to answering this question with a big YES. Yes to the perspective that our future is bright. Yes to our ability to create solutions to our challenges. And yes to our ability to nurture future leaders and new leadership capacities.

MetaIntegral Academy creates programs that help change leaders unleash their deeper potentials in order to become the change they want to see in the world. As a result of the success of our EPC program, we have been looking for ways to share some of its essence with a wider audience. We've had many requests to share this material, so we came up with this mini-course as a way of doing just that.

Here’s a brief description of the course modules:
  1. Power and Grace: Using Complexity Thinking and Intuitive Inquiry to Navigate These Turbulent Times, with Barrett C. Brown
  2. Thriving in the Flow through Action Inquiry, with Jesse McKay and Danielle Conroy
  3. From Taking a Stand to Inspiring Shared Action: The Art of Integral Enrollment, with Sean Esbjörn-Hargens and Dana Carman
  4. Integration: Putting It All Together to Thrive in a Wild, Complex World, with faculty from Modules 1-3.

This course is for change leaders at any level—local, national, global—who want to enact a vision, express their unique talents more effectively, and enact their potential to do more and be more.
For course details and to register for free, click here.

Help us get the word out!

Researchers Investigate Novel Approaches to Reducing Negative Memories

Two new studies hit the news on Wednesday, both of which involve changing the emotional impact of memories.

The first was a joint project between MIT and Howard Hughes Medical Institute researchers. We'll start with the press release from MIT, which describes a study that uses optogenetics (light stimulation) to alter the emotional associations of memories:

Neuroscientists reverse memories' emotional associations: Brain circuit that links feelings to memories manipulated

Date: August 27, 2014
Source: Massachusetts Institute of Technology

Summary:
Most memories have some kind of emotion associated with them: Recalling the week you just spent at the beach probably makes you feel happy, while reflecting on being bullied provokes more negative feelings. A new study from neuroscientists reveals the brain circuit that controls how memories become linked with positive or negative emotions.

This image depicts the injection sites and the expression of the viral constructs in the two areas of the brain studied: the Dentate Gyrus of the hippocampus (middle) and the Basolateral Amygdala (bottom corners). Credit: Image courtesy of the researchers

Most memories have some kind of emotion associated with them: Recalling the week you just spent at the beach probably makes you feel happy, while reflecting on being bullied provokes more negative feelings.

A new study from MIT neuroscientists reveals the brain circuit that controls how memories become linked with positive or negative emotions. Furthermore, the researchers found that they could reverse the emotional association of specific memories by manipulating brain cells with optogenetics -- a technique that uses light to control neuron activity.

The findings, described in the Aug. 27 issue of Nature, demonstrated that a neuronal circuit connecting the hippocampus and the amygdala plays a critical role in associating emotion with memory. This circuit could offer a target for new drugs to help treat conditions such as post-traumatic stress disorder, the researchers say.

"In the future, one may be able to develop methods that help people to remember positive memories more strongly than negative ones," says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT's Picower Institute for Learning and Memory, and senior author of the paper.

The paper's lead authors are Roger Redondo, a Howard Hughes Medical Institute postdoc at MIT, and Joshua Kim, a graduate student in MIT's Department of Biology.

Shifting memories

Memories are made of many elements, which are stored in different parts of the brain. A memory's context, including information about the location where the event took place, is stored in cells of the hippocampus, while emotions linked to that memory are found in the amygdala.

Previous research has shown that many aspects of memory, including emotional associations, are malleable. Psychotherapists have taken advantage of this to help patients suffering from depression and post-traumatic stress disorder, but the neural circuitry underlying such malleability is not known.

In this study, the researchers set out to explore that malleability with an experimental technique they recently devised that allows them to tag neurons that encode a specific memory, or engram. To achieve this, they label hippocampal cells that are turned on during memory formation with a light-sensitive protein called channelrhodopsin. From that point on, any time those cells are activated with light, the mice recall the memory encoded by that group of cells.

Last year, Tonegawa's lab used this technique to implant, or "incept," false memories in mice by reactivating engrams while the mice were undergoing a different experience. In the new study, the researchers wanted to investigate how the context of a memory becomes linked to a particular emotion. First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock). In this first set of experiments, the researchers labeled memory cells in a part of the hippocampus called the dentate gyrus.

Two days later, the mice were placed into a large rectangular arena. For three minutes, the researchers recorded which half of the arena the mice naturally preferred. Then, for mice that had received the fear conditioning, the researchers stimulated the labeled cells in the dentate gyrus with light whenever the mice went into the preferred side. The mice soon began avoiding that area, showing that the reactivation of the fear memory had been successful.

The reward memory could also be reactivated: For mice that were reward-conditioned, the researchers stimulated them with light whenever they went into the less-preferred side, and they soon began to spend more time there, recalling the pleasant memory.

A couple of days later, the researchers tried to reverse the mice's emotional responses. For male mice that had originally received the fear conditioning, they activated the memory cells involved in the fear memory with light for 12 minutes while the mice spent time with female mice. For mice that had initially received the reward conditioning, memory cells were activated while they received mild electric shocks.

Next, the researchers again put the mice in the large two-zone arena. This time, the mice that had originally been conditioned with fear and had avoided the side of the chamber where their hippocampal cells were activated by the laser now began to spend more time in that side when their hippocampal cells were activated, showing that a pleasant association had replaced the fearful one. This reversal also took place in mice that went from reward to fear conditioning.
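To make the logic of these closed-loop place tests easier to follow, here is a minimal sketch in Python of the control flow described above: track which half of the arena the animal occupies, deliver light only on the target side, and tally time per side as the behavioral readout. The function names `get_mouse_side` and `set_laser` are hypothetical stand-ins (simulated here), and all timing values are illustrative rather than taken from the paper.

```python
import random
import time

# Hypothetical stand-ins for tracking and stimulation hardware; these names
# are placeholders for illustration, not any real lab API.
def get_mouse_side() -> str:
    """Pretend position readout: which half of the arena the mouse is in."""
    return random.choice(["left", "right"])

def set_laser(on: bool) -> None:
    """Pretend light-source control (no-op here)."""
    pass

def closed_loop_place_test(stimulated_side: str,
                           duration_s: float = 180.0,
                           sample_hz: float = 20.0) -> dict:
    """Deliver light only while the animal occupies `stimulated_side`
    and tally time spent on each side (the behavioral readout)."""
    time_on_side = {"left": 0.0, "right": 0.0}
    dt = 1.0 / sample_hz
    n_samples = int(duration_s * sample_hz)
    for _ in range(n_samples):
        side = get_mouse_side()
        time_on_side[side] += dt
        # Reactivate the labelled engram only on the target side.
        set_laser(side == stimulated_side)
        time.sleep(dt)  # in a real rig this would be the tracking frame interval
    set_laser(False)
    return time_on_side

if __name__ == "__main__":
    # e.g. stimulate the initially preferred side after fear conditioning
    # and look for a drop in time spent there.
    print(closed_loop_place_test("left", duration_s=3.0))
```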

Altered connections

The researchers then performed the same set of experiments but labeled memory cells in the basolateral amygdala, a region involved in processing emotions. This time, they could not induce a switch by reactivating those cells -- the mice continued to behave as they had been conditioned when the memory cells were first labeled.

This suggests that emotional associations, also called valences, are encoded somewhere in the neural circuitry that connects the dentate gyrus to the amygdala, the researchers say. A fearful experience strengthens the connections between the hippocampal engram and fear-encoding cells in the amygdala, but that connection can be weakened later on as new connections are formed between the hippocampus and amygdala cells that encode positive associations.

"That plasticity of the connection between the hippocampus and the amygdala plays a crucial role in the switching of the valence of the memory," Tonegawa says.

These results indicate that while dentate gyrus cells are neutral with respect to emotion, individual amygdala cells are precommitted to encode fear or reward memory. The researchers are now trying to discover molecular signatures of these two types of amygdala cells. They are also investigating whether reactivating pleasant memories has any effect on depression, in hopes of identifying new targets for drugs to treat depression and post-traumatic stress disorder.

David Anderson, a professor of biology at the California Institute of Technology, says the study makes an important contribution to neuroscientists' fundamental understanding of the brain and also has potential implications for treating mental illness.

"This is a tour de force of modern molecular-biology-based methods for analyzing processes, such as learning and memory, at the neural-circuitry level. It's one of the most sophisticated studies of this type that I've seen," he says.

The research was funded by the RIKEN Brain Science Institute, Howard Hughes Medical Institute, and the JPB Foundation.

Story Source:
The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Anne Trafton. Note: Materials may be edited for content and length.

Journal Reference:
Redondo RL, Kim J, Arons AL, Ramirez S, Liu X, Tonegawa S. (2014, Aug 27). Bidirectional switch of the valence associated with a hippocampal contextual memory engram. Nature; DOI: 10.1038/nature13725

* * * * *

Here is the abstract for the Nature article, which is pay-walled, of course.

Bidirectional switch of the valence associated with a hippocampal contextual memory engram

Roger L. Redondo, Joshua Kim, Autumn L. Arons, Steve Ramirez, Xu Liu & Susumu Tonegawa

Nature (2014) doi:10.1038/nature13725 Published online 27 August 2014

The valence of memories is malleable because of their intrinsic reconstructive property [1]. This property of memory has been used clinically to treat maladaptive behaviours [2]. However, the neuronal mechanisms and brain circuits that enable the switching of the valence of memories remain largely unknown. Here we investigated these mechanisms by applying the recently developed memory engram cell-manipulation technique [3], [4]. We labelled with channelrhodopsin-2 (ChR2) a population of cells in either the dorsal dentate gyrus (DG) of the hippocampus or the basolateral complex of the amygdala (BLA) that were specifically activated during contextual fear or reward conditioning. Both groups of fear-conditioned mice displayed aversive light-dependent responses in an optogenetic place avoidance test, whereas both DG- and BLA-labelled mice that underwent reward conditioning exhibited an appetitive response in an optogenetic place preference test. Next, in an attempt to reverse the valence of memory within a subject, mice whose DG or BLA engram had initially been labelled by contextual fear or reward conditioning were subjected to a second conditioning of the opposite valence while their original DG or BLA engram was reactivated by blue light. Subsequent optogenetic place avoidance and preference tests revealed that although the DG-engram group displayed a response indicating a switch of the memory valence, the BLA-engram group did not. This switch was also evident at the cellular level by a change in functional connectivity between DG engram-bearing cells and BLA engram-bearing cells. Thus, we found that in the DG, the neurons carrying the memory engram of a given neutral context have plasticity such that the valence of a conditioned response evoked by their reactivation can be reversed by re-associating this contextual memory engram with a new unconditioned stimulus of an opposite valence. Our present work provides new insight into the functional neural circuits underlying the malleability of emotional memory.

References:
  1. Pavlov, I. P. Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex (Oxford Univ. Press, 1927)
  2. Wolpe, J. Psychotherapy by Reciprocal Inhibition (Stanford Univ. Press, 1958)
  3. Liu, X. et al. Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature 484, 381–385 (2012)
  4. Ramirez, S. et al. Creating a false memory in the hippocampus. Science 341, 387–391 (2013)
* * * * *

The second study comes from researchers at Harvard University who are using xenon gas to remove the emotional context from traumatic memories. This is a rodent study, but the results suggest further research will be coming.

Xenon gas is already used as a general anesthetic with fewer side effects, and it actually provides some cardioprotection and neuroprotection. From Wikipedia:
Xenon is a high-affinity glycine-site NMDA receptor antagonist.[129] However, xenon distinguishes itself from other clinically used NMDA receptor antagonists in its lack of neurotoxicity and its ability to inhibit the neurotoxicity of ketamine and nitrous oxide.[130][131] Unlike ketamine and nitrous oxide, xenon does not stimulate a dopamine efflux from the nucleus accumbens.[132]
First up is the press release from Harvard (the Harvard Gazette), and then the abstract and introduction from PLOS ONE, the open-access publication platform for science.

Erasing traumatic memories

Xenon exposure may be potential new treatment for people with PTSD

August 27, 2014 | Editor's Pick


By Scott O’Brien, McLean Hospital Communications

Researchers at Harvard-affiliated McLean Hospital are reporting that xenon gas, used in humans for anesthesia and diagnostic imaging, has the potential to become a treatment for post-traumatic stress disorder (PTSD) and other memory-related disorders.

“In our study, we found that xenon gas has the capability of reducing memories of traumatic events,” said Edward G. Meloni, assistant psychologist at McLean and an assistant professor of psychiatry at Harvard Medical School (HMS). “It’s an exciting breakthrough.”

In the study, published in the current issue of PLOS ONE, Meloni and HMS Associate Professor of Psychiatry Marc J. Kaufman, director of the Translational Imaging Laboratory at McLean, examined whether a low concentration of xenon gas could interfere with a process called reconsolidation — a state in which reactivated memories become susceptible to modification. “We know from previous research that each time an emotional memory is recalled, the brain actually re-stores it as if it were a new memory. With this knowledge, we decided to see whether we could alter the process by introducing xenon gas immediately after a fear memory was reactivated,” explained Meloni.


Statistics show an increase in PTSD diagnoses among the military. Harvard researchers are investigating a potential breakthrough that would treat symptoms associated with PTSD. Credit: Congressional Research Service PTSD data/McLean Hospital
The investigators used an animal model of PTSD called fear conditioning to train rats to be afraid of environmental cues that were paired with brief foot shocks. Reactivating the fearful memory was done by exposing the rats to those same cues and measuring their freezing response as a readout of fear. “We found that a single exposure to the gas, which is known to block NMDA receptors involved in memory formation in the brain, dramatically and persistently reduced fear responses for up to two weeks. It was as though the animals no longer remembered to be afraid of those cues,” said Meloni.

Meloni points out that the inherent properties of a gas such as xenon make it especially attractive for targeting dynamic processes like memory reconsolidation. “Unlike other drugs or medications that may also block NMDA receptors involved in memory, xenon gets in and out of the brain very quickly. This suggests that xenon could be given at the exact time the memory is reactivated, and for a limited amount of time, which may be key features for any potential therapy used in humans.”

“The fact that we were able to inhibit remembering of a traumatic memory with xenon is very promising because it is currently used in humans for other purposes, and thus it could be repurposed to treat PTSD,” added Kaufman.

For these investigators, several questions remain to be addressed with further testing. “From here we want to explore whether lower xenon doses or shorter exposure times would also block memory reconsolidation and the expression of fear. We’d also like to know if xenon is as effective at reducing traumatic memories from past events, so-called remote memories, versus the newly formed ones we tested in our study.”

Meloni and Kaufman indicate that future studies are planned to test whether the effects of xenon in rats that they saw in their study translate to humans. Given that intrusive re-experiencing of traumatic memories — including flashbacks, nightmares, and distress and physiological reactions induced by trauma reminders — is a hallmark symptom for many who suffer from PTSD, a treatment that alleviates the impact of those painful memories could provide welcome relief.

The study may be viewed on the PLOS ONE website.
* * * * *

Xenon Impairs Reconsolidation of Fear Memories in a Rat Model of Post-Traumatic Stress Disorder (PTSD)


Edward G. Meloni, Timothy E. Gillis, Jasmine Manoukian, Marc J. Kaufman

Abstract

Xenon (Xe) is a noble gas that has been developed for use in people as an inhalational anesthetic and a diagnostic imaging agent. Xe inhibits glutamatergic N-methyl-D-aspartate (NMDA) receptors involved in learning and memory and can affect synaptic plasticity in the amygdala and hippocampus, two brain areas known to play a role in fear conditioning models of post-traumatic stress disorder (PTSD). Because glutamate receptors also have been shown to play a role in fear memory reconsolidation – a state in which recalled memories become susceptible to modification – we examined whether Xe administered after fear memory reactivation could affect subsequent expression of fear-like behavior (freezing) in rats. Male Sprague-Dawley rats were trained for contextual and cued fear conditioning and the effects of inhaled Xe (25%, 1 hr) on fear memory reconsolidation were tested using conditioned freezing measured days or weeks after reactivation/Xe administration. Xe administration immediately after fear memory reactivation significantly reduced conditioned freezing when tested 48 h, 96 h or 18 d after reactivation/Xe administration. Xe did not affect freezing when treatment was delayed until 2 h after reactivation or when administered in the absence of fear memory reactivation. These data suggest that Xe substantially and persistently inhibits memory reconsolidation in a reactivation and time-dependent manner, that it could be used as a new research tool to characterize reconsolidation and other memory processes, and that it could be developed to treat people with PTSD and other disorders related to emotional memory.
Full Citation: 
Meloni EG, Gillis TE, Manoukian J, Kaufman MJ. (2014, Aug 27). Xenon Impairs Reconsolidation of Fear Memories in a Rat Model of Post-Traumatic Stress Disorder (PTSD). PLoS ONE 9(8): e106189. doi:10.1371/journal.pone.0106189

Introduction

Mitigation of persistent, intrusive, traumatic memories experienced by people with post-traumatic stress disorder (PTSD) remains a key therapeutic challenge [1]. Behavioral treatments such as extinction training – administered alone or in combination with cognitive-enhancing drugs (e.g. d-cycloserine) – attempt to inhibit underlying traumatic memories by facilitating a new set of learning contingencies, but often achieve limited success [2]. Another learning and memory phenomenon known as reconsolidation, a process by which reactivated (retrieved) memories temporarily enter a labile state (the reconsolidation window), has been studied to determine whether drug or behavioral interventions can prevent a traumatic memory trace from being re-incorporated back into the neural engram, inhibiting the memory [3]–[6]. Several chemical agents have been found to inhibit fear memory reconsolidation in animals [7] but unfortunately do not translate well to humans, limiting their clinical use. They either are toxic (e.g. protein synthesis inhibitors), induce unwanted side effects, are slow acting such that brain drug concentrations peak outside of the reconsolidation window, or are slowly eliminated such that they interfere with later onset memory processes including extinction [8]. A recent human study documented that a single electroconvulsive therapy (ECT) treatment administered to unipolar depressed subjects immediately after emotional memory reactivation disrupted reconsolidation, confirming that reconsolidation occurs in humans and that it can be inhibited by a brief treatment [9]. While ECT is indicated for therapeutic use in people with treatment-resistant major depression, it may not be a viable treatment for other clinical populations. Thus, there is a significant unmet need for a minimally invasive, safe and well-tolerated treatment that can be used clinically to inhibit fear memory reconsolidation in people with PTSD.

The noble gas xenon (Xe) inhibits glutamatergic N-methyl-D-aspartate (NMDA) receptors [10] known to play a role in memory reconsolidation [11]. Xe reduces NMDA-mediated synaptic currents and neuronal plasticity in the basolateral amygdala and CA1 region of the hippocampus [12], [13]; these brain areas are involved in Pavlovian fear conditioning, an animal model of PTSD used to elucidate learning and memory processes, including reconsolidation [14]–[16]. Xe already is used in humans at high concentration (>50%) as an anesthetic and at subsedative concentration (28%) as a diagnostic imaging agent; in both applications, Xe has excellent safety/side effect profiles and is well tolerated [17]–[19]. Further, NMDA receptor glycine antagonists like Xe [10] do not appear to have significant abuse liability and do not induce psychosis [20], [21], consistent with clinical experience [18], [19]. Thus, Xe has a number of favorable properties that might be beneficial for treating fear memory disorders. As fear memory reconsolidation is an “evolutionarily conserved memory-update mechanism” [5], we evaluated in rats whether administering a subsedative concentration of Xe (maximum concentration 25%, 1 h) via inhalation following conditioned fear memory reactivation could reduce subsequent expression of fear-like behavior. Here, we report that Xe impaired reconsolidation of fear memory demonstrated as a reduction in conditioned freezing, a behavioral readout used to measure fear in animals.
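Since the study's key readout is conditioned freezing, it may help to see how such a measure is typically computed from behavior. The sketch below is a generic illustration, not the authors' scoring pipeline: it takes a per-frame motion index and counts immobile bouts longer than a minimum duration as freezing, with threshold values that are purely illustrative.

```python
from typing import Sequence

def percent_freezing(motion: Sequence[float],
                     frame_hz: float = 10.0,
                     motion_threshold: float = 0.05,
                     min_bout_s: float = 1.0) -> float:
    """Estimate percent time freezing from a per-frame motion index.

    A frame counts as immobile when its motion index falls below
    `motion_threshold`; only immobile runs lasting at least `min_bout_s`
    are scored as freezing. These defaults are illustrative, not the
    parameters used in the study.
    """
    min_bout_frames = int(min_bout_s * frame_hz)
    frozen_frames = 0
    run = 0
    for m in motion:
        if m < motion_threshold:
            run += 1
        else:
            if run >= min_bout_frames:
                frozen_frames += run
            run = 0
    if run >= min_bout_frames:  # count a trailing immobile bout
        frozen_frames += run
    return 100.0 * frozen_frames / max(len(motion), 1)

# Made-up traces purely to show the calculation, e.g. comparing a control
# animal (mostly immobile) with a treated one (only brief pauses).
control = [0.0] * 80 + [0.3] * 20
treated = [0.3, 0.0, 0.3, 0.0] * 25
print(percent_freezing(control), percent_freezing(treated))
```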

Neuroscience’s New Toolbox - Optogenetics


New technologies like optogenetics are making the study of the brain much more precise. I remain unconvinced that pretty pictures of the brain, even highly detailed 3-dimensional images, will reveal the secrets of emotions or consciousness, what it is like to experience red, or what it feels like to be a bat. The technology and the pictures are pretty cool, though.

Via the MIT Technology Review.

Neuroscience’s New Toolbox

With the invention of optogenetics and other technologies, researchers can investigate the source of emotions, memory, and consciousness for the first time.


By Stephen S. Hall on June 17, 2014
Sculpture by Joshua Harker
Why It Matters
A better understanding of how memories, emotions, and cognition work in the brain could lead to ways to improve and manipulate such functions.
What might be called the “make love, not war” branch of behavioral neuroscience began to take shape in (where else?) California several years ago, when researchers in David J. Anderson’s laboratory at Caltech decided to tackle the biology of aggression. They initiated the line of research by orchestrating the murine version of Fight Night: they goaded male mice into tangling with rival males and then, with painstaking molecular detective work, zeroed in on a smattering of cells in the hypothalamus that became active when the mice started to fight.

The hypothalamus is a small structure deep in the brain that, among other functions, coordinates sensory inputs—the appearance of a rival, for example—with instinctual behavioral responses. Back in the 1920s, Walter Hess of the University of Zurich (who would win a Nobel in 1949) had shown that if you stuck an electrode into the brain of a cat and electrically stimulated certain regions of the hypothalamus, you could turn a purring feline into a furry blur of aggression. Several interesting hypotheses tried to explain how and why that happened, but there was no way to test them. Like a lot of fundamental questions in brain science, the mystery of aggression didn’t go away over the past century—it just hit the usual empirical roadblocks. We had good questions but no technology to get at the answers.

By 2010, Anderson’s Caltech lab had begun to tease apart the underlying mechanisms and neural circuitry of aggression in their pugnacious mice. Armed with a series of new technologies that allowed them to focus on individual clumps of cells within brain regions, they stumbled onto a surprising anatomical discovery: the tiny part of the hypothalamus that seemed correlated with aggressive behavior was intertwined with the part associated with the impulse to mate. That small duchy of cells—the technical name is the ventromedial hypothalamus—turned out to be an assembly of roughly 5,000 neurons, all marbled together, some of them seemingly connected to copulating and others to fighting.

“There’s no such thing as a generic neuron,” says Anderson, who estimates that there may be up to 10,000 distinct classes of neurons in the brain. Even tiny regions of the brain contain a mixture, he says, and these neurons “often influence behavior in different, opposing directions.” In the case of the hypothalamus, some of the neurons seemed to become active during aggressive behavior, some of them during mating behavior, and a small subset—about 20 percent—during both fighting and mating.

That was a provocative discovery, but it was also a relic of old-style neuroscience. Being active was not the same as causing the behavior; it was just a correlation. How did the scientists know for sure what was triggering the behavior? Could they provoke a mouse to pick a fight simply by tickling a few cells in the hypothalamus?

A decade ago, that would have been technologically impossible. But in the last 10 years, neuroscience has been transformed by a remarkable new technology called optogenetics, invented by scientists at Stanford University and first described in 2005. The Caltech researchers were able to insert a genetically modified light-sensitive gene into specific cells at particular locations in the brain of a living, breathing, feisty, and occasionally canoodling male mouse. Using a hair-thin fiber-optic thread inserted into that living brain, they could then turn the neurons in the hypothalamus on and off with a burst of light.


Optogenetics: Light Switches for Neurons

Anderson and his colleagues used optogenetics to produce a video dramatizing the love-hate tensions deep within rodents. It shows a male mouse doing what comes naturally, mating with a female, until the Caltech researchers switch on the light, at which instant the murine lothario flies into a rage. When the light is on, even a mild-mannered male mouse can be induced to attack whatever target happens to be nearby—his reproductive partner, another male mouse, a castrated male (normally not perceived as a threat), or, most improbably, a rubber glove dropped into the cage.

“Activating these neurons with optogenetic techniques is sufficient to activate aggressive behavior not only toward appropriate targets like another male mouse but also toward inappropriate targets, like females and even inanimate objects,” Anderson says. Conversely, researchers can inhibit these neurons in the middle of a fight by turning the light off, he says: “You can stop the fight dead in its tracks.”

Moreover, the research suggests that lovemaking overrides war-making in the calculus of behavior: the closer a mouse was to consummation of the reproductive act, the more resistant (or oblivious) he became to the light pulses that normally triggered aggression. In a paper published in Biological Psychiatry, titled “Optogenetics, Sex, and Violence in the Brain: Implications for Psychiatry,” Anderson noted, “Perhaps the imperative to ‘make love, not war’ is hard-wired into our nervous system, to a greater extent than we have realized.” We may be both lovers and fighters, with the slimmest of neurological distances separating the two impulses.

No one is suggesting that we’re on the verge of deploying neural circuit breakers to curb aggressive behavior. But, as Anderson points out, the research highlights a larger point about how a new technology can reinvent the way brain science is done. “The ability of optogenetics to turn a largely correlational field of science into one that tests causation has been transformative,” he says.

What’s radical about the technique is that it allows scientists to perturb a cell or a network of cells with exquisite precision, the key to sketching out the circuitry that affects various types of behavior. Whereas older technologies like imaging allowed researchers to watch the brain in action, optogenetics enables them to influence that action, tinkering with specific parts of the brain at specific times to see what happens.

And optogenetics is just one of a suite of revolutionary new tools that are likely to play leading roles in what looks like a heyday for neuroscience. Major initiatives in both the United States and Europe aspire to understand how the human brain—that tangled three-pound curd of neurons, connective tissue, and circuits—gives rise to everything from abstract thought to basic sensory processing to emotions like aggression. Consciousness, free will, memory, learning—they are all on the table now, as researchers use these tools to investigate how the brain achieves its seemingly mysterious effects (see “Searching for the ‘Free Will’ Neuron”).

Connections
More than 2,000 years ago, Hippocrates noted that if you want to understand the mind, you must begin by studying the brain. Nothing has happened in the last two millennia to change that imperative—except the tools that neuroscience is bringing to the task.

The history of neuroscience, like the history of science itself, is often a story of new devices and new technologies. Luigi Galvani’s first accidental electrode, which provoked the twitch of a frog’s muscle, has inspired every subsequent electrical probe, from Walter Hess’s cat prod to the current therapeutic use of deep brain stimulation to treat Parkinson’s disease (approximately 30,000 people worldwide now have electrodes implanted in their brains to treat this condition). The patch clamp allowed neuroanatomists to see the ebb and flow of ions in a neuron as it prepares to fire. And little did Paul Lauterbur realize, when he focused a strong magnetic field on a single hapless clam in his lab at the State University of New York at Stony Brook in the early 1970s, that he and his colleagues were laying the groundwork for the magnetic resonance imaging (MRI) machines that have helped reveal the internal landscape and activity of a living brain.


Growing Neurons: Studying What Goes Wrong

But it is the advances in genetics and genomic tools during the last few years that have truly revolutionized neuroscience. Those advances made the genetic manipulations at the heart of optogenetics possible. Even more recent genome-editing methods can be used to precisely alter the genetics of living cells in the lab. Along with optogenetics, these tools mean scientists can begin to pinpoint the function of the thousands of different types of nerve cells among the roughly 86 billion in the human brain.

Nothing testifies to the value of a new technology more than the number of scientists who rapidly adopt it and use it to claim new scientific territories. As Edward Boyden, a scientist at MIT who helped develop optogenetics, puts it, “Often when a new technology comes out, there’s a bit of a land grab.”

And even as researchers grab those opportunities in genomics and optogenetics, still other advances are coming on the scene. A new chemical treatment is making it possible to directly see nerve fibers in mammalian brains; robotic microelectrodes can eavesdrop on (and perturb) single cells in living animals; and more sophisticated imaging techniques let researchers match up nerve cells and fibers in brain slices to create a three-dimensional map of the connections. Using these tools together to build up an understanding of the brain’s activity, scientists hope to capture the biggest of cognitive game: memory, decision-making, consciousness, psychiatric illnesses like anxiety and depression, and, yes, sex and violence.

In January 2013, the European Commission invested a billion euros in the launch of its Human Brain Project, a 10-year initiative to map out all the connections in the brain. Several months later, in April 2013, the Obama administration announced an initiative called Brain Research through Advanced Innovative Neurotechnologies (BRAIN), which is expected to pour as much as $1 billion into the field, with much of the early funding earmarked for technology development. Then there is the Human Connectome Project, which aims to use electron microscope images of sequential slices of brain tissue to map nerve cells and their connections in three dimensions. Complementary connectome and mapping initiatives are getting under way at the Howard Hughes Medical Institute in Virginia and the Allen Institute for Brain Science in Seattle. They are all part of a large global effort, both publicly and privately funded, to build a comprehensive picture of the human brain, from the level of genes and cells to that of connections and circuits.

Last December, as an initial step in the BRAIN Initiative, the National Institutes of Health solicited proposals for $40 million worth of projects on technology development in the neurosciences. “Why is the BRAIN Initiative putting such a heavy emphasis on technology?” says Cornelia Bargmann, the Rockefeller University neuroscientist who co-directs the planning process for the project. “The real goal is to understand how the brain works, at many levels, in space and time, in many different neurons, all at once. And what’s prevented us from understanding that is limitations in technology.”

Eavesdropping

Optogenetics had its origins in 2000, in late-night chitchat at Stanford University. There, neuroscientists Karl Deisseroth and Edward Boyden began to bounce ideas back and forth about ways to identify, and ultimately manipulate, the activity of specific brain circuits. Deisseroth, who had a PhD in neuroscience from Stanford, longed to understand (and someday treat) the mental afflictions that have vexed humankind since the era of Hippocrates, notably anxiety and depression (see “Shining Light on Madness”). Boyden, who was pursuing graduate work in brain function, had an omnivorous curiosity about neurotechnology. At first they dreamed about deploying tiny magnetic beads as a way to manipulate brain function in intact, living animals. But at some point during the next five years, a different kind of light bulb went off.

Since the 1970s, microbiologists had been studying a class of light-sensitive molecules known as rhodopsins, which had been identified in simple organisms like bacteria, fungi, and algae. These proteins act like gatekeepers along the cell wall; when they detect a particular wavelength of light, they either let ions into a cell or, conversely, let ions out of it. This ebb and flow of ions mirrors the process by which a neuron fires: the electrical charge within the nerve cell builds up until the cell unleashes a spike of electrical activity flowing along the length of its fiber (or axon) to the synapses, where the message is passed on to the next cell in the pathway. Scientists speculated that if you could smuggle the gene for one of these light-sensitive proteins into a neuron and then pulse the cell with light, you might trigger it to fire. Simply put, you could turn specific neurons in a conscious animal on—or off—with a burst of light.
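The chain described here (light opens a channel, ions flow in, the membrane depolarizes, the neuron spikes) can be caricatured with a toy leaky integrate-and-fire neuron that receives an extra inward current while the "light" is on. This is a cartoon for intuition only; the parameter values are arbitrary and nothing below models real channelrhodopsin kinetics.

```python
def simulate_lif_with_light(t_max_ms: float = 200.0, dt: float = 0.1,
                            light_on_ms: float = 50.0, light_off_ms: float = 150.0,
                            light_current: float = 2.0) -> list:
    """Toy leaky integrate-and-fire neuron driven by a light-gated current.

    All parameters are arbitrary illustration values, not measured
    channelrhodopsin or neuronal constants.
    """
    tau_m = 20.0                          # membrane time constant (ms)
    v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0
    v = v_rest
    spike_times = []
    for i in range(int(t_max_ms / dt)):
        t = i * dt
        # The "channelrhodopsin" term: inward current only while light is on.
        i_light = light_current if light_on_ms <= t < light_off_ms else 0.0
        # Leaky integration toward rest, pushed up by the photocurrent.
        v += (-(v - v_rest) + i_light * 10.0) * (dt / tau_m)
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

# With the light off the cell sits at rest; during the light pulse it
# crosses threshold and fires a handful of spikes.
print(simulate_lif_with_light())
```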

In 2004, Deisseroth successfully inserted the gene for a light-sensitive molecule from algae into mammalian neurons in a dish. Deisseroth and Boyden went on to show that blue light could induce the neurons to fire. At about the same time, a graduate student named Feng Zhang joined Deisseroth’s lab. Zhang, who had acquired a precocious expertise in the techniques of both molecular biology and gene therapy as a high school student in Des Moines, Iowa, showed that the gene for the desired protein could be introduced into neurons by means of genetically engineered viruses. Again using pulses of blue light, the Stanford team then demonstrated that it could turn electrical pulses on and off in the virus-modified mammalian nerve cells. In a landmark paper that appeared in Nature Neuroscience in 2005 (after, Boyden says, it was rejected by Science), Deisseroth, Zhang, and Boyden described the technique. (No one would call it “optogenetics” for another year.)

Neuroscientists immediately seized on the power of the technique by inserting light-sensitive genes into living animals. Researchers in Deisseroth’s own lab used it to identify new pathways that control anxiety in mice, and both Deisseroth’s team and his collaborators at Mount Sinai Hospital in New York used it to turn depression on and off in rats and mice. And Susumu Tonegawa’s lab at MIT recently used optogenetics to create false memories in laboratory animals.

When I visited Boyden’s office at MIT’s Media Lab last December, the scientist called up his favorite recent papers involving optogenetics. In a rush of words as rapid as his keystrokes, Boyden described second-generation technologies already being developed. One involves eavesdropping on single nerve cells in anesthetized and conscious animals in order to see “the things roiling underneath the sea of activity” within a neuron when the animal is unconscious. Boyden said, “It literally sheds light on what it means to have thoughts and awareness and feelings.”

Boyden’s group had also just sent off a paper reporting a new twist on optogenetics: separate, independent neural pathways can be perturbed simultaneously with red and blue wavelengths of light. The technique has the potential to show how different circuits interact with and influence each other. His group is also working on “insanely dense” recording probes and microscopes that aspire to capture whole-brain activity. The ambitions are not modest. “Can you record all the cells in the brain,” he says, “so that you can watch thoughts or decisions or other complex phenomena emerge as you go from sensation to emotion to decision to action site?”


Brain Mapping: Charting the Information Superhighways

A few blocks away, Feng Zhang, who is now an assistant professor at MIT and a faculty member at the Broad Institute, listed age-old neuroscience questions that might now be attacked with the new technologies. “Can you do a memory upgrade and increase the capacity?” he asked. “How are neural circuits genetically encoded? How can you reprogram the genetic instructions? How do you fix the genetic mutations that cause miswiring or other malfunctions of the neural system? How do you make the old brain younger?”

In addition to helping to invent optogenetics, Zhang played a central role in developing a gene-editing technique called CRISPR (see “10 Breakthrough Technologies: Genome Editing,” May/June). The technology allows scientists to target a gene—in neurons, for example—and either delete or modify it. If it’s modified to include a mutation known or suspected to cause brain disorders, scientists can study the progression of those disorders in lab animals. Alternatively, researchers can use CRISPR in the lab to alter stem cells that can then be grown into neurons to see the effects.

Transparency

Back at Stanford, when he’s not seeing patients with autism spectrum disorders or depression in the clinic, Deisseroth continues to invent tools that he and others can use to study these conditions. Last summer, his lab reported a new way for scientists to visualize the cables of nerve fibers, known as “white matter,” that connect distant precincts of the brain. The technique, called Clarity, first immobilizes biomolecules such as protein and DNA in a plastic-like mesh that retains the physical integrity of a postmortem brain. Then researchers flush a kind of detergent through the mesh to dissolve all the fats in brain tissue that normally block light. The brain is rendered transparent, suddenly exposing the entire three-dimensional wiring pattern to view.

Together, the new tools are transforming many conventional views in neuroscience. For example, as Deisseroth noted in a review article published earlier this year in Nature, optogenetics has challenged some of the ideas underlying deep brain stimulation, which has been widely used to treat everything from tremors and epilepsy to anxiety and obsessive-compulsive disorder. No one knows just why it works, but the operating assumption has been that its therapeutic effects derive from electrical stimulation of very specific brain regions; neurosurgeons exert extraordinary effort to place electrodes with the utmost precision.

In 2009, however, Deisseroth and colleagues showed that specifically stimulating the white matter, the neural cables that happen to lie near the electrodes, produced the most robust clinical improvement in symptoms of Parkinson’s disease. In other words, it wasn’t the neighborhood of the brain that mattered so much as which neural highways happened to pass nearby. Scientists often employ words like “surprising” and “unexpected” to characterize such recent results, reflecting the impact that optogenetics has had on the understanding of psychiatric illness.

In the same vein, Caltech’s Anderson points out that the public and scientific infatuation with functional MRI studies over the last two decades has created the impression that certain regions of the brain act as “centers” of neural activity—that the amygdala is the “center” of fear, for example, or the hypothalamus is the “center” of aggression. But he likens fMRI to looking down on a nighttime landscape from an airplane at 30,000 feet and “trying to figure out what is going on in a single town.” Optogenetics, by contrast, has provided a much more detailed view of that tiny subdivision of cells in the hypothalamus, and thus a much more complex and nuanced picture of aggression. Activating specific neurons in that little town can tip an organism to make war, but activating the neurons next door can nudge it to make love.

The new techniques will give scientists the first glimpses of human cognition in action—a look at how thoughts, feelings, forebodings, and dysfunctional mental activity arise from the neural circuitry and from the activity of particular types of cells. Researchers are just beginning to gain these insights, but given the recent pace of technology development, the picture might emerge sooner than anyone dreamed possible when the light of optogenetics first flickered on a few years ago.

~ Stephen S. Hall is a science writer and author in New York City. His last feature for MIT Technology Review was “Repairing Bad Memories.”

Wednesday, August 27, 2014

Got Tylenol? One Of The Most Dangerous Drugs Is Probably In Your Medicine Cabinet Right Now

This deadly drug, a leading cause of liver failure, is not in our medicine cabinet: I have been arguing for years that Tylenol should be pulled from the market, and I make sure we do not keep products containing that poison in our house.

This comes from Urban Times.

One Of The Most Dangerous Drugs Is Probably In Your Medicine Cabinet Right Now

And you don't even need a prescription to get it.


27th August 2014
Abby Norman
  • Got a headache? Pop a Tylenol.
  • Pulled a muscle? How about a Tylenol?
  • Menstrual cramps, aches and pains, fever, sniffles? There’s a Tylenol for that.

Tylenol as it is marketed in the United States.

But before you shake the bottle, there are few things you should know…


Other common brands/packaging

Acetaminophen, also known as paracetamol and Mapap, and an ingredient in any number of combination drugs like Alka Seltzer Cold and Sinus, Nyquil Cold and Flu, Percocet and Excedrin, is a pain reliever and fever reducer. Unlike some pain-relieving drugs that get a bad rap for addiction, Tylenol and other acetaminophen-containing drugs are non-opioid. This has made acetaminophen widely available over the counter, in formulations for infants, children and adults. Tylenol has long been so widely accepted that some people even take it preventatively as part of a medication regimen.

The mechanism of acetaminophen is actually pretty interesting. Essentially, it blocks the enzyme responsible for sending out prostaglandins, the lipid compounds that cause pain when your cells become injured. So you take a Tylenol, it blocks the enzyme that creates those painful lipids, and boom: relief.


But there’s a catch: it’s incredibly easy to take too much acetaminophen.

There are several reasons for this. One, it’s so widely regarded as “safe” that people often assume they can take higher than the recommended dose without ill effects; and two, it’s found in many more drugs than just Tylenol, making it easy to overdose unintentionally.
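The unintentional-overdose problem is, at bottom, simple arithmetic: acetaminophen from different products accumulates against a single daily ceiling. The sketch below uses a commonly cited 4,000 mg/day adult maximum and rough, illustrative per-dose amounts; actual products vary, so the label, not this example, is the authority.

```python
# Illustrative per-dose acetaminophen content (mg) -- approximate figures,
# not label data; always check the actual packaging.
ACETAMINOPHEN_MG_PER_DOSE = {
    "Extra Strength Tylenol (2 caplets)": 1000,
    "NyQuil Cold & Flu (1 dose)": 650,
    "Percocet (1 tablet)": 325,
}

DAILY_MAX_MG = 4000  # commonly cited adult ceiling; some guidance is lower

def daily_total(doses_taken: dict) -> int:
    """Sum acetaminophen across products for one day."""
    return sum(ACETAMINOPHEN_MG_PER_DOSE[name] * n for name, n in doses_taken.items())

day = {"Extra Strength Tylenol (2 caplets)": 3,   # morning, midday, night
       "NyQuil Cold & Flu (1 dose)": 2}           # plus nighttime cold medicine
total = daily_total(day)
print(f"{total} mg of acetaminophen today "
      f"({'over' if total > DAILY_MAX_MG else 'under'} the {DAILY_MAX_MG} mg ceiling)")
```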

The biggest threat of such an overdose is liver damage. For many years, the FDA has known that long-term use, even at low doses, is linked to liver damage, but the extent of the damage was not well understood. Now the FDA has urged doctors to limit their prescriptions of the drug to help prevent these complications.


The liver is so at-risk because it’s the Brita-filter of our body: anything that you put into your body passes through the liver where “toxins and crap” get filtered out. This is true of everything from compounds in medication to alcohol. The problem with acetaminophen is that when it gets broken down by the liver, one of the compounds it leaves behind, NAPQI, builds up and damages liver cells. The more acetaminophen your liver processes, the more NAPQI gets left behind.

It’s like continuously pouring a tiny bit of muddy water through a coffee filter; it doesn’t take long before the paper is coated with sludge.


The FDA is right to be concerned. The early stages of liver damage are often hard to diagnose because the symptoms can be very vague: loss of appetite, nausea, and eventually maybe a little jaundice. Once a liver is truly damaged, the only option is a liver transplant.

In addition to complications from toxicity, there is new research claiming that acetaminophen may actually help influenza spread faster. If you’re down and out with the flu and you take acetaminophen to reduce your fever, the study points to two mechanisms that hasten the spread of the flu. First, the sooner you are fever-free, the sooner you end up back in public interacting with friends, family and co-workers, unwittingly spreading the illness because, since you feel better, you forget you’re still contagious. Second, there is research indicating that acetaminophen use increases the amount of virus that you shed, meaning you send more infectious particles out into the world when you cough or sneeze. The research isn’t precise, and was done predominantly with mathematical inferences, but it does open another important line of investigation into the overuse of acetaminophen in Western medicine.


The big take away?

Overusing Tylenol, either by taking more than the recommended dose or by mixing multiple drugs that contain acetaminophen, does not carry benefits that outweigh the risk of liver damage. So you’re better off having a slight headache than popping another Tylenol before that eight-hour window is up and putting your liver at risk.

Yolles & Fink: Personality, Pathology and Mindsets (3 Parts)


I've only read the first part of this three-part series, but it seems interesting and somewhat aligned with some of my own views. There is little I can say about it that you can't read in the abstract and in the Introduction (included below).

Personality, Pathology and Mindsets: Part 1 – Agency, Personality and Mindscapes

Personality, Pathology and Mindsets: Part 2 – Cultural Traits and Enantiomers

Personality, Pathology and Mindsets: Part 3 – Pathologies and Corruption

  • Maurice Yolles - John Moores University - Centre for the Creation of Coherent Change and Knowledge (C4K)
  • Gerhard Fink - IACCM International Association for Cross Cultural Competence and Management

Kybernetes; 43(1): 92-112; 2014


Abstract:


Purpose – This paper aims to develop a new socio-cognitive theory of the normative personality of a plural agency like, for instance, an organisation or a political system. This cybernetic agency theory is connected to Bandura’s theory of psychosocial function. The agency is adaptive and has a normative personality that operates through three formative personality traits, the function of which is control. The cybernetic agency theory is presented as a meta-model, which comes from cybernetic "living systems" theory.

Design/methodology/approach – First, in this paper, the authors discuss the virtues of a normative cybernetic agency model in the light of issues related to normal states and pathologies of systems. Formative traits could be derived from Maruyama’s mindscape theory or Harvey’s typology. However, Boje has noted that with four mindscape types Maruyama’s typology is constrained. Consequently, he projected the Maruyama mindscapes into a space with the three Foucault dimensions: knowledge, ethics and power.

Findings – The suggested cybernetic agency model with the three formative personality traits can provide a framing for a structural model that has the potential to distinguish between normal and abnormal personalities in the same framework.

Research limitations/implications – The constraints of the Maruyama mindscape space, as identified by Boje, suggest that further research is needed to identify a formative three-trait system which is theory-based, has been empirically applied, and permits the creation of a typology with eight extreme types, yet to be identified.

Originality/value – The paper draws on earlier work undertaken in the last few years by the same authors, who here pursue new directions and extensions of that earlier research.
Here is the Introduction to Part One of this article.

Introduction


Our interest in this paper lies in coherent adaptable agencies having personalities that drive their behaviour, and the nature and consequences of the pathologies that they experience. These agencies may be individuals or social collectives, the distinction between them being that the latter operates through normative attributes that are created by the social collective of individuals, and are self-regulated and cohesive through their adhesion to the norms of a given culture. Yolles (2009) has examined the social collective in terms of its “collective mind”, thereby relating more closely psychology and social psychology. As such he has explored the social collective in relation to its ability to behave as a singular cognitive entity, and just like an individual person has collective social psychological conditions that are equivalent to, but will have additional mechanisms from, the psychology of individuals. Both are agents of behaviour with cognitive processes, the former being composed of a collection of individuals that operates as a coherent unit through norms, and the latter a unitary person.


Having related the individual with the social, our focus centres on more complex but more or less transparent plural agencies, i.e. social collectives (e.g. organisations and nations). They are durable and develop behaviour by virtue of their cognitive and existential attributes. In such an agency individuals and their empirical psychology create collective cognitive and existential processes, and as such there is a broad relationship between the social psychology of collectives and the psychology of the individual. Agencies also can respond to their environments and adapt, and are able to survive despite the fact they are likely to have pathologies that impact on their operative capabilities. Agency pathologies are conditions of organisational ill-health that:

  • may be constituted as faults in the interconnection between the parts of the organisation;
  • are inhibiters of organisational coherence; and
  • can prevent the parts from autonomously regulating their collective existence.
They prevent an agency from operating in a way that enables it to:
  • be effective;
  • implement its wants and needs; and
  • behave effectively.
Pathology can create deficits in the ability to perform properly through poor: (a) management; (b) procedures; (c) communications; and (d) the development of aspirations and motivation. Pathologies can more easily arise in an authoritarian management style in which there is:
  • a high degree of central control;
  • little communication;
  • single voice (mono-vocal) top-down management;
  • limited scope for individual initiative with an orientation towards obedience and the provision of orders;
  • centralised decision-making process that tend to be repetitive;
  • reluctance to start innovative processes;
  • high degrees of conformity; and
  • high level of resistance to change.
Pathologies occur when individuals and groups in a social agency are prevented from autonomously regulating their collective existence in a way that opposes their capacity to operate successfully and adapt. The pathologies may be considered to cause varying degrees of dysfunction, the degree determining the intensity or density of the pathologies being experienced. Such dysfunctions affect not only an agency’s orientation, but the way in which it operates as a whole. So pathology can be important in the provision of an understanding of why particular types of behaviour are manifested, and how they may be dealt with where they represent important degrees of ill-health or unfitness or dysfunction. Following Yolles (2009), two orientations of pathology can be identified: internally directed, or endogenous, and externally directed, or exogenous. Endogenous pathologies can cause social abnormalities like neuroses and dysfunctions (Kets de Vries, 1991) that can interfere with the way the agency performs its tasks, while exogenous pathologies, i.e. those directed towards the environment, can result in sociopathic behaviour. Having appropriate agency models can improve our ability to understand the nature of the social psychology and can provide an entry into how its pathologies may be “treated”. One of our interests in this paper is therefore to develop a modelling approach for the collective agency that can lead to explanations of how the pathologies of an organisation can be harmful not only to the collective itself, but also potentially to the social environments in which they exist.

Normative personalities are seen to be social psychological entities. They are embedded within an agency that can be connected with an intensity or density of pathologies. Ordered personalities have a low density of pathologies, and very disordered personalities have a high density, the latter traditionally being seen as the result of an unbalanced personality (Sane, 2012). Janowsky et al. (1998) have studied individuals with personality disorders – that is, personalities with a relatively high density of pathologies. They have used at least two approaches to model these. They adopted the atheoretic pathology oriented Minnesota multiphasic personality inventory (MMPI) to identify psychopathology and personality structure, an approach according to Eisenman (2006) that is inadequate for use with “normal” people since it is designed to highlight pathologies and personality disorder. Janowsky et al. (2002) also used the Myers-Briggs type indicator (MBTI) (Myers Briggs, 2000), normally referred to as MBTI, to profile the personalities of in-patient alcoholics/substance-use disorder patients in two classes: those with a concurrent affective disorder diagnosis as well as those not so diagnosed. They found that there were correlations between some disorders and the type classes used in MBTI to distinguish individual differences in personalities of introversion, sensing and feeling preference. This suggests that it might be possible, given appropriate theory, to associate different personalities with pathology densities, a proposition that we are theoretically interested in here. In line with this interest, Markon et al. (2005) note that there is increasing evidence that normal and abnormal personality can be treated within a single structural framework, and according to O’Connor and Dyce (2001) that abnormal personality can be modelled as extremes of normal personality variation. For Markon et al. (2005) a proposed framework has remained elusive, and instead they discuss personality disorder in terms of a hierarchical trait structure which they relate to the personality taxonomy of the Big Five (Schroeder et al., 1992; John and Srivastava, 1991). The situation is even worse than this according to Mayer (2005), who argues that current theory in personality promotes a fragmented view of the person, seen through such competing theories as the psychodynamic, trait, and humanistic. As an alternative, he promotes a systems framework for personality that centres on its identification, its parts, its organization, and its development. As such, Mayer (2005) may be considered to lay the foundations for the systemic approach adopted here.


In the same way that there is a need for a single structural framework for the personality of the individual, the social collective also has a need for one, and given the relationship between them the two frameworks would be expected to be broadly similar. So, in this paper our primary interest lies in modelling the pathology densities of social agencies having normative personalities. To do this we shall take the following steps. We shall adopt a cybernetic agency model that reflects Bandura's (2006) ideas on the adaptive agency, and which arises from "living system" theory (Yolles, 2006). That personality can be represented as a system is not new (Pervin, 1990), but representing it as a living system is. Such an approach can respond to the needs of complexity and uncertainty, and embed features of adaptation and autonomous self-control. In the modelling approach that we shall take, personality controls derive from formative traits. It is a cybernetic personality theory whose frame of reference relates to Maruyama's mindscape theory, which, unlike the classificational MBTI, is relational in nature (Maruyama, 2001). The relationship between the classificational and the relational within the context of personality has been considered by various authors (Baldwin, 1992; Mayer, 1995). For Baldwin (1992), relational schemas can usefully be used to explore cognitive structures and their regularities in patterns of interpersonal relatedness. For Mayer (1995), the power of relational approaches is partly exhibited through their ability to pose questions about the development of personality and the connections between its components that are otherwise inaccessible through classificational approaches. Living system modelling approaches allow not only the classification of systems through their traits, but also the ability to make investigations through system dynamics, by investigating the importance assigned to feed-forward and feed-back processes and the relations (and correlations) between processes. Mayer also notes that relational approaches offer a meaning-structure for thinking about personality and its components, and that they also offer the possibility of synthesising a century's work on personality components into a single representation of personality that is both multifaceted and whole.

While mindscape theory is in essence superior to classificational approaches, the development of personality types in Maruyama's mindscape theory is even less transparent than are the types of the MBTI. Nor do pathologies play a part in mindscape theory, a particular interest of this paper. Since both transparency and pathology will be requirements for an understanding of normative personality, we shall propose a new formative trait basis for mindscape theory. This will come from a study by Sagiv and Schwartz (2007) capable of explaining social agency behaviour, and will result in a class of Sagiv-Schwartz mindscape theory that will be called mindset theory. We shall then briefly explore the spectrum of pathology densities, relating them to social contexts and differentiating, for instance, between ethical and sociopathic organisations. This approach allows us to redefine the "mindscape concept" in slightly broader terms than those adopted by Maruyama. Maruyama's theory of mindscapes, which adopts the idea of epistemological structures, is cognitive in that it refers to the way in which people interpret and process information. Mindscapes exist as a set of distinct, orthogonal epistemological typologies, which can be constituted through a set of culturally connected dimensions that include uncorrelated cognitive bi-polar traits. Such a typology can be theoretically founded in a different way from that developed by Maruyama, by adopting bi-polar traits that can be derived from a coherent "living system" meta-model of a social agency having a "personality system". We integrate empirical constructs from social psychology and social values research into the meta-model and transpose it into a mindscape framework. This allows one to visualize the set of personality types that can emerge, and further allows one to discuss typology formation from a coherent platform that can integrate different mindscape concepts. Such a meta-model will be useful for understanding nested levels of investigation from the top down (society, organizations, teams, individuals) and from the bottom up (from individual personalities, through normative organisational personality, to socio- and economic-political personality at the societal level).

Aaron Gordon - Does Randomness Actually Exist?

Does randomness exist? Can we even fathom the question? This is an interesting article from Aaron Gordon at Pacific Standard.

Does Randomness Actually Exist?

By Aaron Gordon • August 25, 2014 

An Enigma machine. (Photo: Wikimedia Commons)

Our human minds are incapable of truly answering that question.


All week long we’ll be posting stories about randomness and how poorly we tend to deal with it. Check back tomorrow for more.

Pick a number. Any number, one through 100. Got one? OK, so how did you pick it?

Humans are bad at creating and detecting randomness. Perceiving patterns has proven a great survival mechanism—the giant, spotted cats eat my children; this berry doesn’t make me sick—so we have evolved to be good at it. Perhaps too good. We misinterpret data all the time as a result of this desire for order. We believe that when a coin comes up heads five straight times, we are “due” for a tails, or we think that the stock market is predictable. It’s maybe unsurprising, then, that humans aren’t very good random number generators. And because of that, we’ve had to make some.
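To make the "due for a tails" intuition concrete, here is a minimal simulation sketch (not from the article; the one-million-flip count and the fixed seed are arbitrary choices for illustration). It flips a fair virtual coin many times and looks at what follows each run of five heads: tails still comes up only about half the time.

import random

# Flip a simulated fair coin many times, then check what follows each run
# of five straight heads. If the "due for a tails" intuition were right,
# tails would dominate after a streak; in practice the next flip stays ~50/50.
random.seed(42)  # fixed seed so the sketch is repeatable
flips = [random.choice("HT") for _ in range(1_000_000)]

next_after_streak = [
    flips[i + 5]
    for i in range(len(flips) - 5)
    if flips[i:i + 5] == ["H"] * 5
]
share_tails = next_after_streak.count("T") / len(next_after_streak)
print("flips following five straight heads:", len(next_after_streak))
print("share of those that were tails: %.3f" % share_tails)  # close to 0.500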

If you Google “Random Number Generators,” you’ll find several on the first page that are perfectly capable of mimicking a random process. After specifying a range, they will return a number. Do so 100 or 1,000 or 10,000 times, and you won’t find any discernible pattern to the results. Yet despite the name, the results are anything but random.

Computers are hyper-logical machines that can only follow specific commands. As explained by a BBC Radio broadcast from 2011, some of the random number generators you’ll find on Google follow something called the “Middle Squares” method: start with a seed number, which can be any number. Square that number. You’ll now have roughly twice as many digits. Take a few of the digits in the middle of that number and square that. Repeating this process is like shuffling a deck of cards. Still, if you know three basic pieces of information—the seed number, the number of digits taken from the middle of each square, and how many times the process will be repeated—you can calculate this supposedly “random” number every single time without fail.
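As a rough sketch of the method just described (the digit width, padding and seed here are illustrative assumptions, not details from the broadcast), the middle-square generator fits in a few lines of Python. The point is the last two calls: feed it the same seed and you get the same "random-looking" sequence every time.

def middle_square(seed, digits=4, count=10):
    """Middle-square pseudo-random generator: square the current value,
    pad the result to twice the digit width, keep the middle digits,
    and repeat. Anyone who knows the seed, the digit width and the
    number of iterations can reproduce the sequence exactly."""
    value = seed
    sequence = []
    for _ in range(count):
        squared = str(value * value).zfill(2 * digits)  # pad so a middle exists
        start = (len(squared) - digits) // 2
        value = int(squared[start:start + digits])      # keep the middle digits
        sequence.append(value)
    return sequence

print(middle_square(6752))  # looks patternless...
print(middle_square(6752))  # ...but the same seed gives the identical list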

Mathematicians have a word for this kind of randomness. They cleverly call it “pseudo-randomness”: the process passes statistical tests for randomness, yet the number itself is completely determined. On the BBC Radio broadcast, professor Colva Roney-Dougal of the University of St. Andrews says, “I can never prove that a sequence is random, I can only prove that it looks random and smells random.”
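A small illustration of that point, using Python's built-in generator (the Mersenne Twister, which is not mentioned in the article): the stream below passes a crude frequency check for uniformity, yet re-seeding reproduces it digit for digit.

import random
from collections import Counter

gen = random.Random(2014)                       # deterministic once seeded
digits = [gen.randint(0, 9) for _ in range(100_000)]

# Crude frequency test: each digit 0-9 should appear close to 10% of the time.
counts = Counter(digits)
for d in range(10):
    print(d, round(counts[d] / len(digits), 3))

# Yet the "random" stream is completely determined by the seed.
gen_again = random.Random(2014)
print(digits[:10] == [gen_again.randint(0, 9) for _ in range(10)])  # True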

All of which brings us to this: Given the limits of human knowledge, how can we ever know if something is truly random?

A FEW ANCIENT THINKERS, known as Atomists, fathered a line of thought, which claims that, in fact, randomness doesn’t exist. The most deterministic among them, Democritus, believed the entire state of the universe could be explained through cause and effect. In other words, he was only interested in how the past dictated the present and future.

Once you learn about pseudo-randomness, it's easy to see the world through Democritus' eyes. Rolling dice isn't random. Instead, the dice are governed by specific, mathematical laws, and if we knew the exact contours of the desk and the force applied to the dice, we could calculate which sides would come to rest facing upward. The same is true of shuffling cards. If we knew the exact height the cards were lifted, the exact force with which they were released, and the distance between them, it would be completely feasible to calculate the order of the cards, time and time again. This is true for every game of chance governed by Newtonian, or classical, physics. It all appears completely deterministic.

A lack of true randomness would be a huge problem, just like it was for the Germans during World War II with their revered but ultimately doomed Enigma enciphering machine. With its 150 quintillion different settings, many Allied cryptologists believed the code was unbreakable. Yet, because it was a mere matter of rotor settings and circuitry—or put simply, completely deterministic—the Allies were able to crack the code.

Since Newtonian physics has proven resistant to true randomness, cryptologists have looked instead to quantum physics, the rules that govern subatomic particles, which are completely different from Newtonian physics. Radioactive materials spontaneously throw off particles in a probabilistic manner, and the exact time at which each particle will be emitted is inherently random. (We think.) So given a small window of time, the number of radioactive particles emitted can act as the seed for a random number generator.
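The general pattern the article describes (sample an unpredictable physical source, then use that sample as the seed of a generator) can be sketched as follows. Since standard libraries don't expose a radioactive-decay counter, the operating system's entropy pool, via os.urandom, stands in here as an assumed substitute source; this is an illustration of the idea, not the patented hardware.

import os
import random

# Sketch of seeding a generator from a physical entropy source. The OS
# entropy pool (os.urandom) stands in for the radioactive-decay counter
# described in the article; it is an illustrative substitute, not that hardware.
entropy = os.urandom(16)                   # 16 hard-to-predict bytes
seed = int.from_bytes(entropy, "big")      # turn the bytes into an integer seed

gen = random.Random(seed)
sample = [gen.randint(0, 255) for _ in range(8)]
print(sample)

# Note: for real cryptographic keys you would draw directly from the OS
# source (e.g. Python's secrets module) rather than seeding a general-purpose
# pseudo-random generator like this.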

Every time you buy something with a credit card, you're relying on your information being transmitted safely across a perfectly accessible network. This is where the difference between random and pseudo-random becomes vastly important. Pseudo-random patterns, like the ones created by the Enigma machine, are messages begging to be read. Random patterns are the cryptic ideal.

A company called PDH International is one of the patent-holders for Patent US6745217 B2, or “Random Number Generator Based on the Spontaneous Alpha-Decay,” the very process described above. PDH International, with an annual revenue of $10 to $25 million, specializes in the “fields of Privacy Protection, Authentication, Encryption and Electronic Document Protection.” PDH comes up with ways to safely encrypt data using true randomness from quantum physics.

BUT BACK TO THAT number you picked.

As with randomness, the more we learned about the precise nature of brain function, the more we began to question whether free will was possible. If everything is the result of precise causal chains, like the rolling of dice or the shuffling of cards, some wondered how we can really be making genuine choices. However, as we've learned more about quantum physics, the possibility of genuine choice has been revitalized by the break in the causal chain. In a way, quantum physics introduced a giant, unsolvable question mark, and question marks are good for free-will theorists. Ironically, quantum physics simultaneously undermines this line of thought, since randomness is bad for the idea that we are actually making rational choices.

So pick a number, any number. Maybe it is random after all.


Aaron Gordon is a freelance writer living in Washington, D.C. He also contributes to Sports on Earth, The New Yorker, Deadspin, and Slate.

Tuesday, August 26, 2014

Matthew Taylor: Beyond Belief – Towards a New Methodology of Change


This is an interesting article from Royal Society for the encouragement of Arts (RSA) CEO Matthew Taylor on the emerging demand for a participatory politics. I liked this quote: "beyonders want a model of change in which the public has the right and the responsibility to be the subject not the object." And this one, "beyonders tend to be decentralists seeking to devolve decision-making to the level at which the most constructive and responsive discourse between decision makers and citizens can occur."

This, of course, is happening more in Britain than in the States - we are content to watch the VMAs, count the seconds until Disney releases the next Star Wars film, and slowly kill ourselves with ignorance and laziness.

Beyond belief – towards a new methodology of change

August 24, 2014 by Matthew Taylor

An exciting and progressive new paradigm for purposive social change is emerging*. For want of a more positive descriptor, this can be called ‘beyond policy’. It has many positive things to say, but its starting point comprises a number of related critiques – some quite new, some very old – of traditional legislative or quasi-legislative decision-making.

One relatively new strand focuses on the problems such decision-making has with the complexity and pace of change in the modern world. For example, in their recent book 'Complexity and the Art of Public Policy' David Colander and Roland Kupers write 'The current policy compass is rooted in assumptions necessary half a century ago… while social and economic theory has advanced, the policy model has not. It is this standard policy compass that is increasingly derailing the policy discussion'. Old linear processes cannot cope with the 'wicked problems' posed by a complex world.

A second strand – most often applied to public service reform – argues that the relational nature of such services means that change cannot be done to people but must be continually negotiated with them, leaving as much room as possible for local discretion at the interface between public commissioner/provider and citizen/service user. The RSA identifies the key criterion for public service success as 'social productivity': the degree to which interventions encourage and enable people to be better able to contribute to meeting their own needs.

Design thinking provides another, rather elegant, stick with which to beat traditional policy methods. Here the contrast is between the schematic, inflexible, risk-averse and unresponsive methods of the policy maker and the pragmatic, risk-taking, fast-learning, experimental method of the designer. Across the world, governments local and national – including the UK with its recently established Policy Lab – are trying to bring the design perspective into decision-making (generally it promises lots of possibility at the margins but has proven hard to bring anywhere near the centre of power).

Connected to the design critique, the rise of what David Price and Dom Potter, among others, refer to as 'open' organisations challenges many aspects of the technocratic model of expert policy makers ensconced in Whitehall or Town Hall. When transparency is expected and secrecy ever harder to maintain, and when innovation is vital but increasingly seen to take place at the fuzzy margins of organisations, then we are all potential policy experts.

A final strand worth mentioning (I am sure there are others) is more ideological and idealistic. Following the civic republican tradition, beyonders want a model of change in which the public has the right and the responsibility to be the subject, not the object. There is, for example, the distinction made many years ago by the historian Peter Clarke between 'moral' and 'mechanical' traditions in the British labour movement. The former (favoured by 'beyonders') is concerned with embedding progressive values in the hearts and minds of citizens who will themselves build a better society, while the latter is focused on winning power so that those in authority can mould a fairer, better world according to their grand plan.

The dictionary definition of policy is: ‘a course or principle of action adopted or proposed by an organisation or individual’. So, echoing Bertrand Russell’s problem with the set that contains all sets, the most obvious objection to ‘beyond policy’ is that it is, well….a policy. ‘Beyonders’ are not anarchists. The issue here is not whether people in power should make decisions; after all, it is because they are judged to be likely to make good decisions that they have been vested with authority. The differences between the ‘traditional’ and ‘beyond’ policy camps are in practice ones of degree. Often the best traditional policy turns out to have used versions of the new methods. But that doesn’t mean the differences between the approaches aren’t important and often pretty obvious.

Beyonders put greater emphasis on citizens not only engaging with decisions but being part of their implementation. We recognise the importance of clear and explicit goals and shared metrics, but rather than setting these in stone at the outset we see them emerging from a conversation authentically led and openly convened, using a new style of dispersed and shared authority.

Beyonders are likely to see civic mobilisation as preceding, and possibly being an alternative to, legislative policy, whereas traditionalists will tend to see mobilisation as something that happens after policy has been agreed by experts. Beyonders tend, at least at the outset, to be more pragmatic and flexible about the timeframe over which major change can occur – depending as it does on public engagement and consent – whereas traditionalists pride themselves (before a fall) on their demanding and fixed timetables. And, of course, beyonders tend to be decentralists, seeking to devolve decision-making to the level at which the most constructive and responsive discourse between decision makers and citizens can occur.

Another reasonable challenge to the new paradigm is that it can’t be equally applied to all areas of policy. When it comes, for example, to military engagement or infrastructure investment, surely we need clear decisions made at the top and then imposed regardless?

Yet even here the case is not clear-cut. One of the reasons we sometimes get infrastructure wrong in areas like transport and energy is that the policy-making establishment (not just the law makers but those paid to advise and influence them) prefers big-ticket schemes (which tend also to generate big-ticket opposition) to more evolutionary, innovative or local solutions. And as the military and police know, without winning hearts and minds most martial solutions fail to sustain. A topical example is the way the terrorist threat in the UK is now less to do with organised conspiracy (requiring sophisticated and centralised surveillance) and more to do with disturbed and alienated youth who need to be identified and engaged with at a community level.

Perhaps the biggest challenge to the beyond policy paradigm is that it requires fundamental changes not just in the way we do policy, but in how we think about politics, accountability and social responsibility. The solidity of traditional policy making is contained within a wider system which cannot easily contend with the much more fluid material of 'beyond policy'. When, for example, I tell politicians that their most constructive power may lie not in passing laws, imposing regulations or even spending money but in convening new types of conversation, they react like body builders who have been asked to train using only cuddly toys.

Reflecting the way we tend to think about the world, the beyonders' revolution requires action on several levels. Innovation shows us a better way of making change that lasts: see, for example, the work of Bruce Katz and Jennifer Bradley of the Brookings Institution on the advances made by US metros, often based on the convening power of the city mayor. Included in the ranks of a new generation of beyond policy practitioners are community organisers, ethnographers, big data analysts and service designers – they can all tell you why traditional policy making is a problem, and they rarely see it as the best way to find solutions. There are also more academics and respected former policy makers (like former Canadian cabinet secretary Jocelyne Bourgon) helping to provide conceptual clarity and professional credibility to the project.

'Beyond policy' is a movement in progress, but in recognising its flaws and gaps we mustn't forget the traditional system's glaring inadequacies, or that the political class is still, on the whole, clinging tightly to it: over the next ten months our political parties will offer manifestos full of old-style policy to be enacted through an increasingly unreal model of social change.

If the problem were simply that the policies and pledges were unlikely to be enacted, that would be bad enough. It is worse. Politicians feel they pay a high price for broken promises, so, if elected, they demand that the machine try to 'deliver' regardless of whether the policy makes any sense, or of any learning that points to the need to change course. The result is often distorted priorities and perverse outcomes, along with gaming, demoralisation and cynicism among public servants. No chief executive of a large corporation (and none is as large as the UK government) would dream of tying themselves in detail to a plan that is supposed to last the best part of five years regardless of unpredictable events. But that is exactly what we will apparently command our politicians – facing much more complex tasks and challenges – to do in ten months' time.

Surely now, before another government is elected on a false and damaging prospectus, it's time to move beyond convention and have a grown-up conversation about how society changes for good and how politicians can best make a positive difference.

* This is an edited version of an article I have written for the New South Wales Institute of Public Administration.

Matthew Taylor became Chief Executive of the RSA in November 2006. Prior to this appointment, he was Chief Adviser on Political Strategy to the Prime Minister.