
Thursday, February 06, 2014

Science in a Complex World: Declassification of Data Important to Future Science

The Santa Fe Institute publishes a series of articles from time to time in the Santa Fe New Mexican on complexity science and complex systems that are critical to human society — economies, ecosystems, conflict, disease, human social institutions and the global condition.



Posted: Sunday, February 2, 2014
Eric Rupley

Did you know that top-secret intelligence gathered by the U.S. government has played a key role in helping scientists understand how human societies and ecosystems have evolved over the last 10,000 years?


The catch, of course, is that this has happened only after the declassification of the intelligence.

I am an archaeologist and anthropologist at the Santa Fe Institute. With my colleagues, I study the long-term evolution of human societies, seeking the shared underlying principles that are responsible for the emergence of complex social, political and economic organization. To do this, we need two things: ideas about how things happened and data to evaluate those ideas. The evaluation of ideas with data leads to new ideas; this is the process that leads to scientific discovery.
Here is a story about a discovery in which declassified top-secret data was critical. We know from archaeology that the first large-scale societies with a differentiated labor force, record-keeping bureaucracies and political systems that united communities beyond kinship emerged on the planet at least 6,000 years ago. This happened first in Mesopotamia — not just the land between the Tigris and Euphrates rivers, but all the lands drained by them in what is now southern Turkey, western Iran, Syria and Iraq.

The evolution of these first economies, we now know, occurred across the entire region. We didn’t always know this. Twenty years ago, we used to think the evolution of the earliest economies occurred only in a restricted area of southern Mesopotamia, mostly south of Baghdad. The area, called the “heartland of cities” by the eminent archaeologist (and former Santa Fe Institute trustee) Robert McCormick Adams, requires irrigation for agriculture.

New ways of thinking and new evidence have changed our view. Initially, we envisioned a core area of initial social innovation, while regions outside the core were “under-developed” and only passively participated in the creation of the first complex societies.

The mechanisms for the creation of a centralized bureaucracy were thought to have stemmed from the environmental characteristics of the core: In the last century, some archaeologists believed it arose from the need to manage irrigation. But when it became clear that complex irrigation systems do not require centralized control, the irrigation hypothesis was replaced by other ideas about how communities in the region evolved. One idea was that a lack of material resources forced centralized trade and, thus, centralized bureaucracy.

Over the last 15 years, however, new information has been recovered that is leading us to an understanding that the origins of complex economies were neither as restricted in location nor as external in cause as we once thought; almost all of Mesopotamia was locally involved in the evolution of a more complex regional economy. This new view leads us to new models of how the change occurred, and these new models emphasize internal forces over external conditions. In turn, this new understanding allows us to more effectively compare the evolution of civilization across the planet, identifying key evolutionary phenomena shared among human societies globally.

And here’s the crux of this story: In part, this discovery was made possible by one of the most closely held intelligence secrets of the Cold War. The Corona, Argon and Lanyard programs, initiated by the U.S. government in the 1950s, launched the first spy satellites. By the late 1960s, the systems were able to collect imagery with a ground resolution of less than six feet — good enough to identify small trees and large vehicles.

The remarkable half-century-old images contain detail almost as good as state-of-the-art digital images now available from commercial satellites. In addition to the Cold War mission for which they were designed (as dramatized in the 1968 movie Ice Station Zebra), the space photographs incidentally recorded traces of past human settlements that have survived for 10,000 years — crucial evidence about how we came to live in the world we now inhabit. In the last few decades, we have lost much of this landscape to industrial agriculture and mechanized land-leveling.

In 1995, a remarkable thing happened when this closely held secret of the government was partially declassified. The story of how this declassification occurred is only partly known. (See, for example, the work of authors Dwayne Day or Robert McDonald.) One undocumented story involves conversations between the archaeologist mentioned at the start of this piece, Adams, who was then secretary of the Smithsonian, and James Woolsey, then the U.S. director of central intelligence and a regent of the Smithsonian. Whatever the background of the declassification, the last Corona camera was given to the Smithsonian, a presidential order was signed on Feb. 22, 1995, and the imagery from the missions was transferred to the National Archives. The United States Geological Survey took responsibility for releasing the data publicly.

The declassified images were of immediate use to archaeologists working in the Near East because they preserved information about a lost landscape. For the first time, we were able to see the land surface before the destruction of the sites we sought to investigate. (See, for example, the Corona Atlas of the Near East, a project by colleague Jesse Casana of the University of Arkansas, Fayetteville.)

Archaeologists were, in some instances, able to visit the remains of the damaged sites and systematically recover traces of the past cultures that inhabited the region thousands of years ago. What we’ve pieced together from excavation and from archaeological survey aided by the declassified images has helped revise our understanding of how our first complex societies and differentiated economies came to be. The declassified data from Corona have helped not just archaeologists; glaciologists, geologists and ecologists all have used the imagery to monitor how our world has changed.

Unfortunately, for reasons that remain unclear, the initial declassification tapered off, despite its broad scientific success. Yet, in these days of post-Edward Snowden debate over sweeping government information collection, we should keep in mind the importance of declassification to scientists. This is not a proposal to declassify everyone’s metadata. But we are now well beyond space photography and into an era of Big Data (as discussed in past articles in this newspaper by the Santa Fe Institute’s Chris Wood and Simon DeDeo).

While we can, and probably should, limit contemporary collection, part of the debate as we reassess our national surveillance policies should be a consideration of the future scientific utility of archival collections: Should we, in the future, release previously collected “legacy” data in a manner that both protects privacy and helps scientists understand the collective patterns of human interaction that govern our daily lives? If so, what should be the design of a curation policy that would balance privacy concerns and make full use of what we have already gathered?

This is a challenging problem that pits citizen privacy and limits on government against an almost infinite space of future security concerns and what will surely be vastly improved analytical methods leading to greater utility of legacy data. While its purpose remains unclear, we might surmise from the construction of the so-called Utah Data Center (a government facility possibly designed to curate the digital collections of our intelligence community) that the community understands the future utility of currently collected data. But given the benefits of declassification and our concerns for privacy, the questions of if, when and how to release this data are important to us all. Asking these questions is in keeping with the Open Government Initiative signed by President Barack Obama on the first day of his administration.

We don’t yet know what shape an overall declassification policy should take, but we do know this: The data we collect now, on ourselves, will provide the digital archaeologists and historians of the future a window into how we operate as a society. Toward that gift — the understanding of the drivers of change in human society — we can make a direct contribution by taking into account the importance of future public research on legacy intelligence collections.


~ Eric Rupley is an anthropological archaeologist at the Santa Fe Institute and a doctoral candidate at the University of Michigan’s Museum of Anthropology. His research analyzes cross-cultural, region-scale data on past human activities and settlement systems to explore how new forms of social organization emerge. His primary field work has been in Syria and Turkey. He can be reached through email at erupley@santafe.edu.

About the series

The Santa Fe Institute is a private, nonprofit, independent research and education center founded in 1984, where top researchers from around the world gather to study and understand the theoretical foundations and patterns underlying the complex systems that are most critical to human society — economies, ecosystems, conflict, disease, human social institutions and the global condition. This column is part of a series written by researchers at the Santa Fe Institute and published in The New Mexican.

Tuesday, January 28, 2014

Melanie Mitchell - How Can the Study of Complexity Transform Our Understanding of the World?

From Big Questions Online, Melanie Mitchell (Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute) offers a nice and very accessible overview of how the study of complex systems can help us better make sense of our world.

How Can the Study of Complexity Transform Our Understanding of the World?


By Melanie Mitchell
January 20, 2014


image: Stephen Hopkins

In 1894, the physicist and Nobel laureate Albert Michelson declared that science was almost finished; the human race was within a hair’s breadth of understanding everything:

It seems probable that most of the grand underlying principles have now been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice.

Bold and heady predictions like this often seem destined to topple, and, to be sure, the world of physics was soon shaken by the revolutions of relativity and quantum mechanics.

But as the 20th century unfolded, it turned out to be the phenomena closest to our own human scale—biology, social science, economics, politics, among others—that have most notably eluded explanation by any grand principles. The deeper we dig into the workings of ourselves and our society, the more unexpected complexity we find. Fittingly, it was in the 20th century that science began to bridge disciplinary boundaries in order to search for principles of complexity itself.

What is Complexity?

The “study of complexity” refers to the attempt to find common principles underlying the behavior of complex systems—systems in which large collections of components interact in nonlinear ways. Here, the term nonlinear implies that the system can’t be understood simply by understanding its individual components; nonlinear interactions cause the whole to be “more than the sum of its parts.”

Complex systems scientists try to understand how such collective sophistication can come about, whether it be in ant colonies, cells, brains, immune systems, social groups, or economic markets. People who study complexity are intrigued by the suggestive similarities among these disparate systems. All these systems exhibit self-organization: the system’s components organize themselves to act as a coherent whole without the benefit of any central or outside “controller”. Complex systems are able to encode and process information with a sophistication that is not available to the individual components. Complex systems evolve—they are continually changing in an open-ended way, and they learn and adapt over time. Such systems defy precise prediction, and resist the kind of equilibrium that would make them easier for scientists to understand.

Transforming Our Understanding

Of course all important scientific discoveries transform our understanding of nature, but I think that the study of complexity goes a step further: it not only helps us understand important phenomena, but changes our perspective on how to think about nature, and about science itself.

Here are a few examples of the surprising, perspective-changing discoveries of Complex Systems science. (If these don’t seem so surprising to you, it is because your perspective has already been changed by the sciences of complexity!)

Simple rules can yield complex, unpredictable behavior


Why can’t we seem to forecast the weather farther out than a week or so? Why is it so hard to project yearly variation in fishery populations? Why can’t we foresee stock market bubbles and crashes? In the past it was widely assumed that such phenomena are hard to predict because the underlying processes are highly complex, and that random factors must play a key role. However, Complex Systems science—especially the study of dynamics and chaos—has shown that complex behavior and unpredictability can arise in a system even if the underlying rules are extremely simple and completely deterministic. Often, the key to complexity is the iteration over time of simple, though nonlinear, interaction rules among the system’s components. It’s still not clear if unpredictability in the weather, stock market, and animal populations is caused by such iteration alone, but the study of chaos has shown that it’s possible.
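To make this concrete, here is a minimal sketch (not from the article) using the logistic map, a textbook example from chaos theory: a one-line deterministic rule whose trajectories, started a hair’s breadth apart, eventually diverge completely.

```python
# Logistic map: a simple, fully deterministic rule that produces
# chaotic, effectively unpredictable behavior at r = 4.
# (Illustrative sketch; parameter values are standard textbook
# choices, not figures taken from the article.)

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r * x * (1 - x) for `steps` steps."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1.0 - x))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)  # perturb the start by 10^-10

# The two trajectories agree at first, then diverge completely:
for n in (0, 10, 30, 50):
    print(n, abs(a[n] - b[n]))
```

This sensitive dependence on initial conditions is why even a perfect model fed a slightly imperfect measurement fails at long-range forecasting.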

More is Different

Above I reiterated the old saw, “the whole is more than the sum of its parts”. The physicist Phil Anderson coined a better aphorism: he noted that a key lesson of complexity is that “more is different”.

Ant colonies are a great example of this. As the ecologist Nigel Franks puts it, “The solitary army ant is behaviorally one of the least sophisticated animals imaginable...If 100 army ants are placed on a flat surface, they will walk around and around in never decreasing circles until they die of exhaustion.” Yet put half a million of them together and the group as a whole behaves as a hard-to-predict “superorganism” with sophisticated, and sometimes frightening, “collective intelligence”. More is different.

Similar stories can be told for neurons in the brain, cells in the immune system, creativity and social movements in cities, and agents in market economies. The study of complexity has shown that when a system’s components have the right kind of interactions, its global behavior—the system’s capacity to process information, to make decisions, to evolve and learn—can be powerfully different from that of its individual components.

Network Thinking

In the early 2000s, the complete human genome was sequenced. While the benefits to science were enormous, some of the predictions made by prominent scientists and others had a Michelsonian flavor (see first paragraph). President Clinton echoed the widely held view that the Human Genome Project would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases.” Indeed, many scientists believed that a complete mapping of human genes would provide a nearly complete understanding of how genetics worked and which genes were responsible for which traits, and that this would guide the way to revolutionary medical discoveries and targeted gene therapies.

Now, more than a decade later, these predicted medical revolutions have not yet materialized. But the Human Genome Project, and the huge progress in genetics research that followed, did uncover some unexpected results. First, human genes (DNA sequences that code for proteins) number around 21,000—far fewer than anyone expected, and about the same number as in mice, worms, and mustard plants. Second, these protein-coding genes make up only about 2% of our DNA. Two mysteries emerge: If we humans have comparatively so few genes, where does our complexity come from? And as for that 98% of non-gene DNA, which in the past was dismissively called "junk DNA", what is its function?

What geneticists have learned is that genetic elements in a cell, like ants in a colony, interact nonlinearly so as to create intricate information-processing networks. It is the networks, rather than the individual genes, that shape the organism. Moreover, and most surprising: the so-called “junk” DNA is key to forming these networks. As biologist John Mattick puts it, “The irony...is that what was dismissed as junk because it wasn’t understood will turn out to hold the secret of human complexity.”

Information-processing networks are emerging as a core organizing principle of biology. What used to be called “cellular signaling pathways” are now “cellular information processing networks.” New research on cancer treatments is focused not on individual genes but on disrupting the cellular information processing networks that many cancers exploit. Some types of bacteria are now known to communicate via “quorum sensing” networks in order to collectively attack a host; this discovery is also driving research into network-specific treatment of infections.

Over the last two decades an interdisciplinary science of networks has emerged, and has developed insights and research methods that apply to networks ranging from genetics to economics. Network thinking is the area of complex systems that has perhaps done the most to transform our understanding of the world.

Non-Normal is the New Normal

In 2009, Nobel Prize-winning economist Paul Krugman said, “Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy.” At least part of this “blindness” was due to the reliance on risk models based on so-called normal distributions.

Figure 1: (a) A hypothetical normal distribution of the probability of financial gain or loss under trading. (b) A hypothetical long-tailed distribution, showing only the loss side. The “tail” of the distribution is the far right-hand side. The long-tailed distribution predicts a considerably higher probability of catastrophic loss than the normal distribution.
The term normal distribution refers to the familiar bell curve. Economists and finance professionals often use such distributions to model the probability of gains and risk of losses from investments. Figure 1(a) shows a hypothetical normal distribution of risk. I’ve marked a hypothetical “catastrophic loss” on the graph. You can see that, given this distribution of risk, the probability of such a loss would be very near zero. Less probable, maybe, than a lightning strike right where you’re standing. Something you don’t have to worry about. Unless the model is wrong.

The study of complexity has shown that in nonlinear, highly networked systems, a more accurate estimation of risk would be a so-called “long-tailed” distribution. Figure 1(b) shows a hypothetical long-tailed distribution of risk (here, only the “loss” side is shown). The longer non-zero “tail” (far right-hand side) of this distribution shows that the probability of a catastrophic loss is significantly higher than for a system obeying a normal distribution. If risk models in 2008 had employed long-tailed rather than normal distributions, the possibility of an “extreme event”—here, “catastrophic loss”—would have been judged more likely.
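The gap between the two risk models can be made concrete with closed-form tail probabilities. A minimal sketch, with the caveat that the Pareto tail index and threshold below are illustrative assumptions, not figures from the article:

```python
import math

def normal_tail(k):
    """P(Z > k) for a standard normal variable, computed exactly
    via the complementary error function (no simulation needed)."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto (long-tailed) distribution.
    The tail index alpha and scale x_min are illustrative choices."""
    return (x_min / x) ** alpha if x > x_min else 1.0

# Probability of a loss at least k "standard units" out:
for k in (3, 6, 10):
    print(f"k={k}: normal={normal_tail(k):.2e}  long-tailed={pareto_tail(k):.2e}")
```

Ten units out, the normal model puts the event near 10^-24 while this long-tailed model puts it near 10^-2: the “catastrophic loss” moves from effectively impossible to merely rare.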

Because long-tailed distributions are now known to be signatures of complex networks, our growing understanding of such networks implies that risk models need to be rethought in many areas, ranging from disease epidemics to power grid failures; from financial crises to ecosystem collapses. The technologist Andreas Antonopoulos puts it succinctly: “The threat is complexity itself”.

Is Complexity a New Science?

“The new science of complexity” has become a catchphrase in some circles. Google reports nearly 87,000 hits on this phrase. But how “new” is the study of complexity? And to what extent is it actually a “science”?

The current scientific efforts centered around complexity have several antecedents. The Cybernetics movement of the 1940s and 50s, the General System Theory movement of the 1960s, and the more recent advent of Systems Biology, Systems Engineering, Systems Science, etc., all share goals with Complex Systems science: finding general principles that explain how system-level behavior emerges from interactions among lower-level components. The different movements capture different (though sometimes overlapping) communities and different foci of attention.

To my mind, Complexity refers not to a single science but rather to a community of scientists in different disciplines who share interdisciplinary interests, methodologies, and a mindset about how to address scientific problems. Just what this mindset consists of is hard to pin down. I would say it includes, first, the assumption that understanding complexity will require integrating concepts from dynamics, information, statistical physics, and evolution. And second, that computer modeling is an essential addition to traditional scientific theory and experimentation. As yet, Complexity is not a single unified science; rather, to paraphrase William James, it is still “the hope of a science”. I believe that this hope has great promise.

In our era of Big Data, what Complexity potentially offers is “Big Theory”—a scientific understanding of the complex processes that produce the data we are drowning in. If the field’s past contributions are any indication, Complexity’s sought-after big theory will even more profoundly transform our understanding of the world.

It’s something to look forward to. In the words of playwright Tom Stoppard: “It’s the best possible time to be alive, when almost everything you thought you knew is wrong.”

Discussion Questions


1. Can you identify any ways in which your own way of thinking has been changed by Complex Systems science?

2. The discussion above stated that when systems get too intricately networked, “the threat is complexity itself”. The network scientist Duncan Watts suggested that the notion “too big to fail” should be rethought as “too complex to exist.” Should we worry about the world becoming too complex? If so, what should we do about it?

3. To what extent do you think the ideas of complex systems are new? What would it take to create a unified science of complexity?

Resources and Further Reading:


  • Complexity Explorer, http://complexityexplorer.org
  • Anderson, P. W. More is different. Science, 177(4047), 1972, 393-396.
  • Bettencourt, L. M., Lobo, J., Helbing, D., Kühnert, C., and West, G. B. Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences, 104(17), 2007, 7301-7306.
  • Franks, N. R. Army ants: A collective intelligence. American Scientist, 77(2), 1989, 138-145.
  • Hayden, E. C. Human genome at ten: Life is complicated. Nature, 464, 2010, 664-667.
  • Krugman, P. How did economists get it so wrong? New York Times, September 2, 2009.
  • Miller, J. H. and Page, S. E. Complex Adaptive Systems. Princeton University Press, 2007.
  • Mitchell, M. Complexity: A Guided Tour. Oxford University Press, 2009.
  • Newman, M. E. J. Networks: An Introduction. Oxford University Press, 2010.
  • Watts, D. Too complex to exist. Boston Globe, June 14, 2009.
  • West, G. Big data needs a big theory to go with it. Scientific American, May 15, 2013.

Saturday, December 07, 2013

A Neuroscientist's Radical Theory of How Networks Become Conscious (WIRED U.K.)

In the Japanese art of the rock garden, the artist must be aware 
of the rocks' "ishigokoro" (‘heart,’ or ‘mind’)


Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, has progressively become less hyper-rational in his understanding of consciousness and more Buddhist - and it's not yet clear whether this is a good thing.

His newest pronouncement is his belief in panpsychism, defined below by Wikipedia:
In philosophy, panpsychism is the view that mind or soul (Greek: ψυχή) is a universal feature of all things, and the primordial feature from which all others are derived. The panpsychist sees him or herself as a mind in a world of minds.

Panpsychism is one of the oldest philosophical theories, and can be ascribed to philosophers like Thales, Plato, Spinoza, Leibniz and William James. Panpsychism can also be seen in eastern philosophies such as Vedanta and Mahayana Buddhism. During the 19th century, panpsychism was the default theory in philosophy of mind, but it saw a decline during the latter half of the 20th century with the rise of logical positivism. The recent interest in the hard problem of consciousness has once again made panpsychism a mainstream theory.
Says Koch: "I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems." This sounds like emergence to me, and less like panpsychism, which is the belief that mind/consciousness is inherent in the universe. I'm more likely to accept emergence as an explanation of consciousness that avoids issues of duality.

See the Stanford Encyclopedia of Philosophy entry on panpsychism for a better understanding of the arguments for and against, as well as its history in philosophy.

A neuroscientist's radical theory of how networks become conscious


15 November 13
by Brandon Keim


A map of neural circuits in the human brain - Human Connectome Project

It's a question that's perplexed philosophers for centuries and scientists for decades: where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery.

Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That's just the way the universe works.

"The electric charge of an electron doesn't arise out of more elemental properties. It simply has a charge," says Koch. "Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems."

What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism -- and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.

Koch's insights have been detailed in dozens of scientific articles and a series of books, including last year's Consciousness: Confessions of a Romantic Reductionist. Wired talked to Koch about his understanding of this age-old question.

Wired: How did you come to believe in panpsychism?

Christof Koch: I grew up Roman Catholic, and also grew up with a dog. And what bothered me was the idea that, while humans had souls and could go to heaven, dogs were not supposed to have souls. Intuitively I felt that either humans and animals alike had souls, or none did. Then I encountered Buddhism, with its emphasis on the universal nature of the conscious mind. You find this idea in philosophy, too, espoused by Plato and Spinoza and Schopenhauer, that psyche -- consciousness -- is everywhere. I find that to be the most satisfying explanation for the universe, for three reasons: biological, metaphysical and computational.

Wired: What do you mean?

Koch: My consciousness is an undeniable fact. One can only infer facts about the universe, such as physics, indirectly, but the one thing I'm utterly certain of is that I'm conscious. I might be confused about the state of my consciousness, but I'm not confused about having it. Then, looking at the biology, all animals have complex physiology, not just humans. And at the level of a grain of brain matter, there's nothing exceptional about human brains.

Only experts can tell, under a microscope, whether a chunk of brain matter is mouse or monkey or human -- and animals have very complicated behaviours. Even honeybees recognise individual faces, communicate the quality and location of food sources via waggle dances, and navigate complex mazes with the aid of cues stored in their short-term memory. If you blow a scent into their hive, they return to where they've previously encountered the odor. That's associative memory. What is the simplest explanation for it? That consciousness extends to all these creatures, that it's an immanent property of highly organised pieces of matter, such as brains.

Wired: That's pretty fuzzy. How does consciousness arise? How can you quantify it?

Koch: There's a theory, called Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin, that assigns to any one brain, or any complex system, a number -- denoted by the Greek letter Φ -- that tells you how integrated a system is, how much more the system is than the union of its parts. Φ gives you an information-theoretical measure of consciousness. Any system with integrated information different from zero has consciousness. Any integration feels like something.

It's not that any physical system has consciousness. A black hole, a heap of sand, a bunch of isolated neurons in a dish, they're not integrated. They have no consciousness. But complex systems do. And how much consciousness they have depends on how many connections they have and how they're wired up.

Wired: Ecosystems are interconnected. Can a forest be conscious?

Koch: In the case of the brain, it's the whole system that's conscious, not the individual nerve cells. For any one ecosystem, it's a question of how richly the individual components, such as the trees in a forest, are integrated within themselves as compared to causal interactions between trees.

The philosopher John Searle, in his review of Consciousness, asked, "Why isn't America conscious?" After all, there are 300 million Americans, interacting in very complicated ways. Why doesn't consciousness extend to all of America? It's because integrated information theory postulates that consciousness is a local maximum. You and me, for example: we're interacting right now, but vastly less than the cells in my brain interact with each other. While you and I are conscious as individuals, there's no conscious Übermind that unites us in a single entity. You and I are not collectively conscious. It's the same thing with ecosystems. In each case, it's a question of the degree and extent of causal interactions among all components making up the system.

Wired: The internet is integrated. Could it be conscious?

Koch: It's difficult to say right now. But consider this. The internet contains about 10 billion computers, with each computer itself having a couple of billion transistors in its CPU. So the internet has at least 10^19 transistors, compared to the roughly 1000 trillion (or quadrillion) synapses in the human brain. That's about 10,000 times more transistors than synapses. But is the internet more complex than the human brain? It depends on the degree of integration of the internet.
 
For instance, our brains are connected all the time. On the internet, computers are packet-switching. They're not connected permanently, but rapidly switch from one to another. But according to my version of panpsychism, it feels like something to be the internet -- and if the internet were down, it wouldn't feel like anything anymore. And that is, in principle, not different from the way I feel when I'm in a deep, dreamless sleep.

Wired: Internet aside, what does a human consciousness share with animal consciousness? Are certain features going to be the same?

Koch: It depends on the sensorium [the scope of our sensory perception -ed.] and the interconnections. For a mouse, this is easy to say. They have a cortex similar to ours, but not a well-developed prefrontal cortex. So it probably doesn't have self-consciousness, or understand symbols like we do, but it sees and hears things similarly.

In every case, you have to look at the underlying neural mechanisms that give rise to the sensory apparatus, and to how they're implemented. There's no universal answer.

Wired: Does a lack of self-consciousness mean an animal has no sense of itself?

Koch: Many mammals don't pass the mirror self-recognition test, including dogs. But I suspect dogs have an olfactory form of self-recognition. You notice that dogs smell other dogs' poop a lot, but they don't smell their own so much. So they probably have some sense of their own smell, a primitive form of self-consciousness. Now, I have no evidence to suggest that a dog sits there and reflects upon itself; I don't think dogs have that level of complexity. But I think dogs can see, and smell, and hear sounds, and be happy and excited, just like children and some adults.

Self-consciousness is something that humans have excessively, and that other animals have much less of, though apes have it to some extent. We have a hugely developed prefrontal cortex. We can ponder.

Wired: How can a creature be happy without self-consciousness?

Koch: When I'm climbing a mountain or a wall, my inner voice is totally silent. Instead, I'm hyperaware of the world around me. I don't worry too much about a fight with my wife, or about a tax return. I can't afford to get lost in my inner self. I'll fall. Same thing if I'm traveling at high speed on a bike. It's not like I have no sense of self in that situation, but it's certainly reduced. And I can be very happy.

Wired: I've read that you don't kill insects if you can avoid it.

Koch: That's true. They're fellow travelers on the road, bookended by eternity on both sides.

Wired: How do you square what you believe about animal consciousness with how they're used in experiments?

Koch: There are two things to put in perspective. First, there are vastly more animals being eaten at McDonald's every day. The number of animals used in research pales in comparison to the number used for flesh. And we need basic brain research to understand the brain's mechanisms. My father died from Parkinson's. One of my daughters died from Sudden Infant Death Syndrome. To prevent these brain diseases, we need to understand the brain -- and that, I think, can be the only true justification for animal research. That in the long run, it leads to a reduction in suffering for all of us. But in the short term, you have to do it in a way that minimises their pain and discomfort, with an awareness that these animals are conscious creatures.

Wired: Getting back to the theory, is your version of panpsychism truly scientific rather than metaphysical? How can it be tested?

Koch: In principle, in all sorts of ways. One implication is that you can build two systems, each with the same input and output -- but one, because of its internal structure, has integrated information. One system would be conscious, and the other not. It's not the input-output behavior that makes a system conscious, but rather the internal wiring.

The theory also says you can have simple systems that are conscious, and complex systems that are not. The cerebellum should not give rise to consciousness because of the simplicity of its connections. Theoretically you could compute that, and see if that's the case, though we can't do that right now. There are millions of details we still don't know. Human brain imaging is too crude. It doesn't get you to the cellular level.

The more relevant question, to me as a scientist, is how can I disprove the theory today. That's more difficult. Tononi's group has built a device to perturb the brain and assess the extent to which severely brain-injured patients -- think of Terri Schiavo -- are truly unconscious, or whether they do feel pain and distress but are unable to communicate to their loved ones. And it may be possible that some other theories of consciousness would fit these facts.

Wired: I still can't shake the feeling that consciousness arising through integrated information is -- arbitrary, somehow. Like an assertion of faith.

Koch: If you think about any explanation of anything, how far back does it go? We're confronted with this in physics. Take quantum mechanics, which is the theory that provides the best description we have of the universe at microscopic scales. Quantum mechanics allows us to design MRI and other useful machines and instruments. But why should quantum mechanics hold in our universe? It seems arbitrary! Can we imagine a universe without it, a universe where Planck's constant has a different value? Ultimately, there's a point beyond which there's no further regress. We live in a universe where, for reasons we don't understand, quantum physics simply is the reigning explanation.

With consciousness, it's ultimately going to be like that. We live in a universe where organised bits of matter give rise to consciousness. And with that, we can ultimately derive all sorts of interesting things: the answer to when a fetus or a baby first becomes conscious, whether a brain-injured patient is conscious, pathologies of consciousness such as schizophrenia, or consciousness in animals. And most people will say, that's a good explanation.

If I can predict the universe, and predict things I see around me, and manipulate them with my explanation, that's what it means to explain. Same thing with consciousness. Why we should live in such a universe is a good question, but I don't see how that can be answered now.

Sunday, May 05, 2013

States of Complexity - Santa Fe Institute Bulletin, April 2013, Vol. 27


States of Complexity


In this issue

Perspectives: Improbable Institutions
By SFI Professor Sam Bowles
How likely was it for these independently formed social systems to resemble each other so closely?

States of Complexity
By Larry O'Hanlon
An SFI research project seeks to reveal why and how the state emerged in human societies.

Imagining Social Complexity
By SFI Omidyar Fellow Scott Ortman
How does the human capacity for metaphor help us shape new social structures?


Saturday, April 13, 2013

Breakthrough - Transparent Brain Imaging


This is a huge breakthrough in brain imaging, as reported in Nature earlier this week. The whole article is available as a PDF online. Eventually, we may figure out how to do this in a living body, so that we can get "real" images of the living, thinking brain.

Transparent Brain Imaging Will Accelerate Research 10 to 100 Times

by BIG THINK EDITORS
APRIL 11, 2013


The world of neuroscience is abuzz with the news that a new technique has been developed to study brain anatomy in mice. By removing the brain and treating it with chemicals, researchers are able to obtain a transparent view.

This advance was made by the bioengineering lab of Dr. Karl Deisseroth at Stanford and reported in the journal Nature yesterday. "Obtaining high-resolution information from a complex system, while maintaining the global perspective needed to understand system function, represents a key challenge in biology," the scientists wrote.

Their answer to this challenge is called CLARITY, which uses chemicals to transform intact brain tissue into a form that is optically "transparent and macromolecule-permeable."

To illustrate this breakthrough, Dr. Deisseroth's team released two videos. One shows "a flythrough" of a mouse brain using a fluorescent imaging technique. The second shows a 3D view of a mouse brain's memory hub, or hippocampus.

As the scientists note, existing methods require making hundreds of thin slices of the brain, which, most crucially, hinders scientists' ability to analyze intact components in relation to each other.

So how significant is this development? "It's exactly the technique everyone's been waiting for," Terry Sejnowski of the Salk Institute told the Associated Press, estimating it will speed up brain anatomy research "by 10 to 100 times."
