Showing posts with label knowledge.

Saturday, January 18, 2014

The Crumbling Ancient Texts That May Hold Life-Saving Cures


Somehow I did not know or remember that Timbuktu was a real place . . . apparently it's time to come out from beneath my rock. Be that as it may, Nautilus posted this very cool article about some of the priceless centuries-old books housed in Timbuktu.

The Crumbling Ancient Texts That May Hold Life-Saving Cures


Posted By Amy Maxmen on Jan 14, 2014


A page from a Timbuktu manuscript. Amy Maxmen

Seven hundred years ago, Timbuktu was a dream destination for scholars, traders, and religious men. At the southern edge of the Sahara desert in what is now Mali, travelers from Europe, sub-Saharan Africa, Egypt, and Morocco met in the bygone metropolis to exchange gold, salt, and ideas. According to a description of Timbuktu in 1526 by the diplomat Leo Africanus, “more profit is to be made there from the sale of books than from any other branch of trade.”

Bundled in camel skin, goat skin, and calf leather, the manuscripts remaining from Timbuktu’s heyday come in an array of sizes. Words from Arabic and African languages, inscribed in gold, red, and jet-black ink, line their pages. Sometimes the text assembles into triangles, or surrounds intricate, geometric designs. I stared at a few ornate pages last September as they were being photographed in an eroded cement building in Mali’s capital, Bamako. Rain pummeled the streets that day, creating pond-size puddles in the dirt road that people would wade through, ankle-deep, without flinching. Upstairs in the building, Abdel Kader Haidara told me how more than 300,000 ancient manuscripts from Timbuktu arrived in Bamako earlier that year.



Abdel Kader Haidara, executive president of the Safeguard and Valorization of Manuscripts for the Defense of Islamic Culture. Amy Maxmen

Haidara wore a rust-colored tunic that reached his ankles and a matching cap, an elegant style typical of men from northern Mali. In an office containing little except for a desk and a rickety bookshelf with a row of books with decorative Arabic across their bindings, he described how his father instructed him to respect the family’s large library. Because of private collections like his, the Timbuktu manuscripts have remained safe with their owners through the generations, rather than ruined or stolen by the parade of powers who have ruled over Timbuktu, including the French, who colonized Mali between 1892 and 1960.

In 1996, Haidara founded an organization to safeguard the manuscripts from weather damage. So when an Al Qaeda-affiliated group invaded Timbuktu last year, destroying tombs and burning any ancient manuscripts they found, Haidara was prepared to help pack his neighbors’ texts into nondescript metal trunks and load them onto donkey carts bound for Bamako. The private collections are now held at an undisclosed location in the city* where scholars might finally, for the first time, lay eyes on them.



The manuscripts were smuggled out of Timbuktu in metal trunks like this one. Eva Brozowsky, Center for the Study of Manuscript Cultures, University of Hamburg.

Subjects in the collections, spanning the 13th through 17th century, include the Koran, Sufism, philosophy, law, medicine, astronomy, and more. Haidara stresses the need for climate-controlled safe-houses for the manuscripts, so that academics can begin to study the books to learn about African history. He thinks the books might also contain information about cures for maladies that persist today. “Every book has answers, and if you analyze them you can learn solutions,” he says. “Everything that exists now, existed before now.” One prime example of this constancy is a plague that has afflicted humans at least since ancient times and currently kills approximately 1.2 million people per year: malaria.

It’s not yet known whether the texts discuss malaria, but it seems likely based on other ancient texts from the region, says Nikolay Dobronravin, a scholar who studies ancient West African manuscripts at St. Petersburg State University in Russia. Dobronravin says African manuscripts contain many passages on tibb, an Arabic word meaning medicine. In one mode of tibb, a healer or teacher writes words from the Koran onto a thin wooden tablet with charcoal-based ink, and the patient washes the tablet down with water. Other, less mystical treatments involve leaves and animal parts consumed as cures for various ailments. “In the Timbuktu collections, a scholar-doctor might have his own book of recipes, comparable to what you find in a cook’s kitchen,” Dobronravin predicts. Villagers might still use some of those herbal remedies today. In a rural, southeastern region of Mali, I saw bundles of leaves sold in the marketplace. My translator told me that villagers boil the leaves to make teas that calm fevers.


 
Leaves and roots sold at a market in Mali are used to treat malaria. Amy Maxmen

This might sound hopeful, until you speak with mothers who have lost children to malaria. Some of them told me they opted for teas and other traditional medicines when their babies fell ill, rather than consult with nurses or doctors. Then the fevers grew worse, convulsions began, and death came swiftly. It’s unfortunately a common story, and one that makes African doctors and health workers weary of the promises of traditional medicine. Still, Haidara says, ancient recipes in the Timbuktu texts could contain forgotten cures that were lost through the ages.

A malaria scientist and doctor at the University of Dakar in Senegal, Badara Cisse, countered Haidara’s assertion by telling me, “In Africa, we are well behind because we love living with our past.” He has watched children die after traditional healers promised their mothers a cure, and it has molded his drive to deliver better solutions through evidence-based science rather than by digging into history. “We need to be open to the new world,” Cisse says.


Timbuktu manuscript page
A page from a Timbuktu manuscript. Eva Brozowsky, Center for the Study of Manuscript Cultures, University of Hamburg.

Yet modern and traditional medicine have at times worked hand-in-hand. A case in point is the modern gold-standard treatment for malaria, artemisinin. The drug derives from the sweet wormwood plant, Artemisia annua, which is listed among other plants that treat fevers in ancient Chinese texts. It was re-discovered in the 1970s as resistance to the malaria drug chloroquine spread around the globe and malaria deaths steadily rose. To curb the death rate, Chairman Mao encouraged Chinese scientists to evaluate hundreds of folk remedies. By 1990, the teams had systematically tested several malaria therapies that included artemisinin, according to a book by historian Dana Dalrymple. In 2001, the World Health Organization (WHO) recommended the treatments for malaria. Finally, in 2005, the rising annual death toll from malaria began to reverse thanks to use of the wormwood-derived drug.

But a new malaria medicine may be needed in the near future if resistance to artemisinin grows. Already, a few cases of resistance have emerged in southeast Asia. Perhaps a clue lies in the Timbuktu texts, but to find it, scholars and researchers must be able to study their pages. As of now, the manuscripts degrade more with each passing year. Haidara has only a few shields to stave off age, moisture, insects, and fungi. To save the books, he and his colleagues rely on funds raised through an Indiegogo page and other private donations, such as a grant from the New York-based Ford Foundation.

Endlessly shifting sands have eroded what was once the golden metropolis of Timbuktu, just as the rain washes away roads outside of Haidara’s headquarters on the day I visit. The manuscripts may contain knowledge that can save thousands of lives, if scholars and scientists reach them before they dissolve into the past, leaving future generations to scour the plant world all over again.

* Correction: The building where the author met Haidara is not the same building where most of the texts are kept, as the article originally stated.

~ Amy Maxmen is senior editor at Nautilus. Her trip to Mali was supported by the Pulitzer Center for Crisis Reporting.

Sunday, July 07, 2013

Alva Noë - “Concepts and Practical Knowledge”


This video is from the 9th International Symposium of Cognition, Logic and Communication, “Perception and Concepts,” May 16-18, 2013, University of Latvia. This talk is the plenary session lecture by philosopher Alva Noë (University of California, Berkeley, USA) - Concepts and Practical Knowledge. Noë's most recent book is Varieties of Presence (2012).

Alva Noe “Concepts and practical knowledge”

By Dmfant


Thursday, May 16, 2013

Organizing the World's Scientific Knowledge to make it Universally Accessible and Powerful


Very interesting, though intensely geeky, talk - "In this high-level talk, we describe a powerful, new knowledge engineering framework for describing scientific observations within a broader strategic model of the scientific process. We describe general open-source tools for scientists to model and manage their data in an attempt to accelerate discovery."

Gully Burns is project leader in the Information Sciences Institute's Information Integration Group, as well as a Research Assistant Professor of neurobiology at USC's College of Letters, Arts and Sciences. He maintains a personal blog called 'Ars-Veritatis, the art of truth'.


Organizing the World's Scientific Knowledge to make it Universally Accessible and Powerful

Published on May 7, 2013

Google Tech Talk
April 30, 2013
Presented by: Gully Burns

ABSTRACT

Not all information is created equal. Accurate, innovative scientific knowledge generally has an enormous impact on humanity. It is the source of our ability to make predictions about our environment. It is the source of new technology (with all its attendant consequences, both positive and negative). It is also a continuous source of wonder and fascination. In general, the value and power of scientific knowledge is not reflected in the scale and structure of the information infrastructure used to house, store and share this knowledge. Many scientists use spreadsheets as the most sophisticated data management tool and only publish their data as PDF files in the literature. 
In this high-level talk, we describe a powerful, new knowledge engineering framework for describing scientific observations within a broader strategic model of the scientific process. We describe general open-source tools for scientists to model and manage their data in an attempt to accelerate discovery. Using examples focused on the high-value challenge problem of finding a cure for Parkinson's Disease, we present a high-level strategic approach that is both in keeping with Google's vision and values and could also provide a viable new research direction that would benefit from Google's massively scalable technology. Ultimately, we present an informatics research initiative for the 21st century: 'Building a Breakthrough Machine'.

Speaker Info

Gully Burns develops pragmatic biomedical knowledge engineering systems for scientists that (a) provide directly useful functionality in their everyday use and (b) are based on innovative, cutting-edge computer science that subtly transforms our ability to use knowledge. He was originally trained as a physicist at Imperial College in London before switching to do a Ph.D. in neuroscience at Oxford. He came to work at USC in 1997, developing the 'NeuroScholar' project in Larry Swanson's lab before joining the Information Sciences Institute in 2006. He now works as project leader in ISI's Information Integration Group, as well as a Research Assistant Professor of neurobiology at USC's College of Letters, Arts and Sciences. He maintains a personal blog called 'Ars-Veritatis, the art of truth', and is very interested in seeing how his research in developing systems for scientists could translate to helping and supporting our understanding and use of knowledge in everyday life.

Friday, March 15, 2013

Paul Horwich - Was Wittgenstein Right?


Following up on the previous post, a film biography of Wittgenstein by Derek Jarman, this article by Paul Horwich at the New York Times philosophy column, The Stone, looks at Wittgenstein's conception of the essential problems of philosophy, and his claims that,
there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking. 
This attitude is in stark opposition to the traditional view, which continues to prevail. Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives. It’s taken for granted that there is deep understanding to be obtained of the nature of consciousness, of how knowledge of the external world is possible, of whether our decisions can be truly free, of the structure of any just society, and so on — and that philosophy’s job is to provide such understanding.
These are not popular views - and Wittgenstein has definitely fallen out of favor, despite having been named in one poll as the most important philosopher of the 20th Century.

NOTE: A response to this post by Michael P. Lynch, Of Flies and Philosophers: Wittgenstein and Philosophy, was published in The Stone later that week.

Was Wittgenstein Right?

By PAUL HORWICH
March 3, 2013


The singular achievement of the controversial early 20th century philosopher Ludwig Wittgenstein was to have discerned the true nature of Western philosophy — what is special about its problems, where they come from, how they should and should not be addressed, and what can and cannot be accomplished by grappling with them. The uniquely insightful answers provided to these meta-questions are what give his treatments of specific issues within the subject — concerning language, experience, knowledge, mathematics, art and religion among them — a power of illumination that cannot be found in the work of others.

Admittedly, few would agree with this rosy assessment — certainly not many professional philosophers. Apart from a small and ignored clique of hard-core supporters the usual view these days is that his writing is self-indulgently obscure and that behind the catchy slogans there is little of intellectual value. But this dismissal disguises what is pretty clearly the real cause of Wittgenstein’s unpopularity within departments of philosophy: namely, his thoroughgoing rejection of the subject as traditionally and currently practiced; his insistence that it can’t give us the kind of knowledge generally regarded as its raison d’être.

Wittgenstein claims that there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking.

Free Press, Ludwig Wittgenstein

This attitude is in stark opposition to the traditional view, which continues to prevail. Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives. It’s taken for granted that there is deep understanding to be obtained of the nature of consciousness, of how knowledge of the external world is possible, of whether our decisions can be truly free, of the structure of any just society, and so on — and that philosophy’s job is to provide such understanding. Isn’t that why we are so fascinated by it?

If so, then we are duped and bound to be disappointed, says Wittgenstein. For these are mere pseudo-problems, the misbegotten products of linguistic illusion and muddled thinking. So it should be entirely unsurprising that the “philosophy” aiming to solve them has been marked by perennial controversy and lack of decisive progress — by an embarrassing failure, after over 2000 years, to settle any of its central issues. Therefore traditional philosophical theorizing must give way to a painstaking identification of its tempting but misguided presuppositions and an understanding of how we ever came to regard them as legitimate. But in that case, he asks, “[w]here does [our] investigation get its importance from, since it seems only to destroy everything interesting, that is, all that is great and important? (As it were all the buildings, leaving behind only bits of stone and rubble)” — and answers that “(w)hat we are destroying is nothing but houses of cards and we are clearing up the ground of language on which they stand.”

Associated Press, Bertrand Russell, one of Wittgenstein’s early teachers, at his home in London in 1962.

Given this extreme pessimism about the potential of philosophy — perhaps tantamount to a denial that there is such a subject — it is hardly surprising that “Wittgenstein” is uttered with a curl of the lip in most philosophical circles. For who likes to be told that his or her life’s work is confused and pointless? Thus, even Bertrand Russell, his early teacher and enthusiastic supporter, was eventually led to complain peevishly that Wittgenstein seems to have “grown tired of serious thinking and invented a doctrine which would make such an activity unnecessary.”

But what is that notorious doctrine, and can it be defended? We might boil it down to four related claims.

The first is that traditional philosophy is scientistic: its primary goals, which are to arrive at simple, general principles, to uncover profound explanations, and to correct naïve opinions, are taken from the sciences. And this is undoubtedly the case.

The second is that the non-empirical (“armchair”) character of philosophical investigation — its focus on conceptual truth — is in tension with those goals. That’s because our concepts exhibit a highly theory-resistant complexity and variability. They evolved, not for the sake of science and its objectives, but rather in order to cater to the interacting contingencies of our nature, our culture, our environment, our communicative needs and our other purposes. As a consequence the commitments defining individual concepts are rarely simple or determinate, and differ dramatically from one concept to another. Moreover, it is not possible (as it is within empirical domains) to accommodate superficial complexity by means of simple principles at a more basic (e.g. microscopic) level.

The third main claim of Wittgenstein’s metaphilosophy — an immediate consequence of the first two — is that traditional philosophy is necessarily pervaded with oversimplification; analogies are unreasonably inflated; exceptions to simple regularities are wrongly dismissed.

Therefore — the fourth claim — a decent approach to the subject must avoid theory-construction and instead be merely “therapeutic,” confined to exposing the irrational assumptions on which theory-oriented investigations are based and the irrational conclusions to which they lead.

Consider, for instance, the paradigmatically philosophical question: “What is truth?”. This provokes perplexity because, on the one hand, it demands an answer of the form, “Truth is such–and-such,” but on the other hand, despite hundreds of years of looking, no acceptable answer of that kind has ever been found. We’ve tried truth as “correspondence with the facts,” as “provability,” as “practical utility,” and as “stable consensus”; but all turned out to be defective in one way or another — either circular or subject to counterexamples. Reactions to this impasse have included a variety of theoretical proposals. Some philosophers have been led to deny that there is such a thing as absolute truth. Some have maintained (insisting on one of the above definitions) that although truth exists, it lacks certain features that are ordinarily attributed to it — for example, that the truth may sometimes be impossible to discover. Some have inferred that truth is intrinsically paradoxical and essentially incomprehensible. And others persist in the attempt to devise a definition that will fit all the intuitive data.

But from Wittgenstein’s perspective each of the first three of these strategies rides roughshod over our fundamental convictions about truth, and the fourth is highly unlikely to succeed. Instead we should begin, he thinks, by recognizing (as mentioned above) that our various concepts play very different roles in our cognitive economy and (correspondingly) are governed by defining principles of very different kinds. Therefore, it was always a mistake to extrapolate from the fact that empirical concepts, such as red or magnetic or alive, stand for properties with specifiable underlying natures to the presumption that the notion of truth must stand for some such property as well.

Wittgenstein’s conceptual pluralism positions us to recognize that notion’s idiosyncratic function, and to infer that truth itself will not be reducible to anything more basic. More specifically, we can see that the concept’s function in our cognitive economy is merely to serve as a device of generalization. It enables us to say such things as “Einstein’s last words were true,” and not be stuck with “If Einstein’s last words were that E=mc2, then E=mc2; and if his last words were that nuclear weapons should be banned, then nuclear weapons should be banned; … and so on,” which has the disadvantage of being infinitely long! Similarly we can use it to say: “We should want our beliefs to be true” (instead of struggling with “We should want that if we believe that E=mc2, then E=mc2; and that if we believe … etc.”). We can see, also, that this sort of utility depends upon nothing more than the fact that the attribution of truth to a statement is obviously equivalent to the statement itself — for example, “It’s true that E=mc2” is equivalent to “E=mc2”. Thus possession of the concept of truth appears to consist in an appreciation of that triviality, rather than a mastery of any explicit definition. The traditional search for such an account (or for some other form of reductive analysis) was a wild-goose chase, a pseudo-problem. Truth emerges as exceptionally unprofound and as exceptionally unmysterious.

This example illustrates the key components of Wittgenstein’s metaphilosophy, and suggests how to flesh them out a little further. Philosophical problems typically arise from the clash between the inevitably idiosyncratic features of special-purpose concepts — true, good, object, person, now, necessary — and the scientistically driven insistence upon uniformity. Moreover, the various kinds of theoretical move designed to resolve such conflicts (forms of skepticism, revisionism, mysterianism and conservative systematization) are not only irrational, but unmotivated. The paradoxes to which they respond should instead be resolved merely by coming to appreciate the mistakes of perverse overgeneralization from which they arose. And the fundamental source of this irrationality is scientism.

As Wittgenstein put it in the “The Blue Book”:
Our craving for generality has [as one] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is “purely descriptive.”
These radical ideas are not obviously correct, and may on close scrutiny turn out to be wrong. But they deserve to receive that scrutiny — to be taken much more seriously than they are. Yes, most of us have been interested in philosophy only because of its promise to deliver precisely the sort of theoretical insights that Wittgenstein argues are illusory. But such hopes are no defense against his critique. Besides, if he turns out to be right, satisfaction enough may surely be found in what we still can get — clarity, demystification and truth.




Paul Horwich is a professor of philosophy at New York University. He is the author of several books, including “Reflections on Meaning,” “Truth-Meaning-Reality,” and most recently, “Wittgenstein’s Metaphilosophy.”
The Stone features the writing of contemporary philosophers on issues both timely and timeless. The series moderator is Simon Critchley. He teaches philosophy at The New School for Social Research in New York. To contact the editors of The Stone, send an e-mail to opinionator@nytimes.com. Please include “The Stone” in the subject field.

Thursday, January 17, 2013

Aaron Swartz - Guerilla Open Access Manifesto (July 2008)

Must reading . . . Aaron Swartz died for these beliefs, for opposing the privatization of knowledge.

Guerilla Open Access Manifesto by Aaron Swartz July 2008

Posted on January 15, 2013
by OrsanSenalp

  1. Information is power. But like all power, there are those who want to keep it for themselves. The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations. Want to read the papers featuring the most famous results of the sciences? You’ll need to send enormous amounts to publishers like Reed Elsevier.
  2. There are those struggling to change this. The Open Access Movement has fought valiantly to ensure that scientists do not sign their copyrights away but instead ensure their work is published on the Internet, under terms that allow anyone to access it. But even under the best scenarios, their work will only apply to things published in the future. Everything up until now will have been lost.
  3. That is too high a price to pay. Forcing academics to pay money to read the work of their colleagues? Scanning entire libraries but only allowing the folks at Google to read them? Providing scientific articles to those at elite universities in the First World, but not to children in the Global South? It’s outrageous and unacceptable.
  4. “I agree,” many say, “but what can we do? The companies hold the copyrights, they make enormous amounts of money by charging for access, and it’s perfectly legal — there’s nothing we can do to stop them.” But there is something we can, something that’s already being done: we can fight back.
  5. Those with access to these resources — students, librarians, scientists — you have been given a privilege. You get to feed at this banquet of knowledge while the rest of the world is locked out. But you need not — indeed, morally, you cannot — keep this privilege for yourselves. You have a duty to share it with the world. And you have: trading passwords with colleagues, filling download requests for friends.
  6. Meanwhile, those who have been locked out are not standing idly by. You have been sneaking through holes and climbing over fences, liberating the information locked up by the publishers and sharing them with your friends.
  7. But all of this action goes on in the dark, hidden underground. It’s called stealing or piracy, as if sharing a wealth of knowledge were the moral equivalent of plundering a ship and murdering its crew. But sharing isn’t immoral — it’s a moral imperative. Only those blinded by greed would refuse to let a friend make a copy.
  8. Large corporations, of course, are blinded by greed. The laws under which they operate require it — their shareholders would revolt at anything less. And the politicians they have bought off back them, passing laws giving them the exclusive power to decide who can make copies.
  9. There is no justice in following unjust laws. It’s time to come into the light and, in the grand tradition of civil disobedience, declare our opposition to this private theft of public culture.
  10. We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that’s out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks. We need to fight for Guerilla Open Access.
  11. With enough of us, around the world, we’ll not just send a strong message opposing the privatization of knowledge — we’ll make it a thing of the past. Will you join us?

Aaron Swartz
July 2008, Eremo, Italy

Thursday, December 20, 2012

Wade Davis - The Wayfinders: Why Ancient Wisdom Matters in the Modern World


Interesting talk about the loss of knowledge as we progressively destroy the few remaining indigenous cultures around the planet. This is one of several videos being put up online from the Creative Innovation 2012 conference in Australia.



The Wayfinders: Why Ancient Wisdom Matters in the Modern World. Wade Davis


Wade Davis is an Explorer-in-Residence at the National Geographic Society. An ethnographer, writer, photographer, and filmmaker, Davis holds degrees in anthropology and biology and received his Ph.D. in ethnobotany, all from Harvard University. In this talk at the Creative Innovation 2012 conference, Davis speaks about the world's at-risk indigenous cultures, the vast archive of knowledge and expertise that they represent, and how we can learn from them. November 2012.

Sunday, September 23, 2012

Omnivore Links - Philosophy, Religion, and Science

A veritable bonanza of links from Bookforum's Omnivore blog - on the value of philosophy, the centrality of religion, and how science advances. Enjoy!



Monday, September 03, 2012

Global Workspace Theory and the Future Evolution of Consciousness, Part Four

The Global Workspace Model

This is the fourth part of a multi-part post (originally intended to be two parts) on Bernard Baars' Global Workspace Theory and the future evolution of consciousness.

In Part One, I outlined the basic ideas of GWT, suggesting that it may be the cognitive model that is closest to being integral while still being able to explain the actual brain circuitry involved in creating self-awareness, the sense of an individual identity, the development of consciousness through stages, the ability of introspection to revise brain wiring, the presence of multiple states of consciousness, and how relationships and the environment (physical, interpersonal, and temporal) may shape and reshape consciousness.

In Part Two, I established a foundation for a paper that seeks to explain how our consciousness will evolve in the future - The Future Evolution of Consciousness by John Stewart (ECCO Working paper, 2006-10, version 1: November 24, 2006). His work assumes some specialized knowledge of cognitive developmental theory, so that post attempted to provide some solid background for the ideas that will come up in the next posts.

In Part Three, I shared a very recent video of Dr. Baars speaking about Global Workspace Theory - The Biological Basis of Conscious Experience: Global Workspace Dynamics in the Brain - a talk given at the Evolution and Function of Consciousness Summer School ("Turing Consciousness 2012") held at the University of Montreal. This post was a bit of a detour in the sequence, but it seemed a useful detour.

Most recently, I took another detour into attention and consciousness to look at how they are distinct functions with unique brain circuits. Many models incorrectly see the two as so entangled that they must be dealt with as a single entity. This is especially relevant to the GWT model and the evolution of consciousness because attention is a tool to direct consciousness in this explanation of how the brain functions. 

* * * * * * *

At the end of Part Two, we concluded with one of the primary ideas in Stewart's model, the Declarative Transition, which is the move from Level-I (implicit) procedural knowledge through the E1, E2, and E3 (explicit) phases, which constitute the transformation of implicit procedural knowledge into explicit declarative knowledge. Once a skill or process reaches the stage of declarative knowledge, it is rarely called into consciousness unless it is targeted directly by some cue, such as a question or a conversation that requires the piece of information.

We know now, after years of studying these brain functions, that the more often a memory, skill, or piece of knowledge is recalled and rehearsed, the more strongly wired it becomes in the brain. The old cliche is that "practice makes perfect," but the reality is that practice makes permanent (Robertson, 2009, From Creation to Consolidation: A Novel Framework for Memory Processing).

When a particular skill or procedure has been revised and expanded using declarative knowledge, it becomes automatic and unconscious again through a process of proceduralization. Stewart summarizes the high-level processing made possible by the proceduralization of declarative memory as unconscious schema:
In any particular domain in which a declarative transition unfolds, the serial process of declarative modelling progressively build a range of new resources and other expert processors, including cognitive skills. Once these processors have been built and proceduralized, they perform their specialist functions without loading consciousness—their outputs alone enter consciousness, without the declarative knowledge that went into their construction. The outputs are known intuitively (i.e. they are not experienced as the result of sequences of thought), and complex situations are understood at a glance (Reber 1989). As noted by Dreyfus and Dreyfus (1987), a person who achieves behavioural mastery in a particular field is able to solve difficult problems just by giving them attention—consciousness recruits the solutions directly from the relevant specialist processors.
In this post, we will look at the process of "evolutionary declarative transitions," as well as additional neuroscientific foundations for the evolution of consciousness.

* * * * * * *
It might be useful to begin with a brief overview of declarative knowledge - it's been a while since I last posted in this series. Declarative memory is what we generally refer to when we think about knowledge - the collection of facts and events that we have access to in memory. Declarative knowledge is also often symbolic knowledge in those who have achieved that level of cognitive development (formal operations in Piaget's model). Timon ten Berge and Rene van Hezewijk (1999, Procedural and Declarative Knowledge: An Evolutionary Perspective) offer this additional background on declarative memory:
Declarative knowledge can be altered under the influence of new memories. Declarative knowledge is not conscious until it is retrieved by cues such as questions. The retrieval process is not consciously accessible either; an individual can only become aware of the products of this process. It is also a very selective process. A given cue will lead to the retrieval of only a very small amount of potentially available information. Expression of declarative knowledge requires directed attention, as opposed to the expression of skills, which is automatic (Tulving, 1985).
One important difference between implicit/procedural and explicit/declarative knowledge is that declarative knowledge is localized in the brain (the medial temporal region, parts of the diencephalic system, and the hippocampus), while procedural memory is less like a "module" and more accurately seen as, well, a procedure or technique; it does not seem at this point to be localized.


As far as we know, only humans have integrated a declarative transition in their individual development to any great extent. I suspect this is something we will one day (if not already) be able to identify in many other species, including some primates, whales and dolphins, elephants, the higher corvids (crows and ravens), and in some parrots (only a partial list).

At the same time, we have only studied the declarative transition process in individual development and not in our development as a species. There is little doubt that we began as a species functioning through innate genetic responses to the environment. Over time, we moved toward classical and instrumental conditioning, which allowed procedural memory to be acquired through trial and error and through observational learning. Eventually, procedural knowledge was transformed to declarative knowledge, which could be "accumulated and transmitted across the generations through the processes of cultural evolution" (Stewart, p. 7).

The problem with the evolution of the human brain is that it followed the laws of selection and adaptation, as evolutionary theory would dictate. So while there are universal adaptations that have become part of our modular mind (Fodor, 1983, The Modularity of Mind), there is a lot less order and organization to these modules than one might want to see.

Gary Marcus describes this evolutionary process of adaptation as a kluge, a term that originated in the world of computer science and means "a clumsy or inelegant solution to a problem" (Marcus, 2009, Kluge: The Haphazard Construction of the Human Mind). This is relevant partly because it impacts how we learn and how we remember. It's also important because it operates both at the physical level (the mammalian/limbic brain developed on top of the reptilian brain, and the neocortex developed over the limbic brain - as in the image below) and in the realm of mind, where functions are highly interdependent.


One example of how this plays out is that non-localized procedural memory seems to be an earlier evolutionary trait than the more localized declarative memory (Bloom and Lazerson, 1988, Brain, Mind, and Behavior), which allows procedural memory to be less impacted by brain injuries or lesions (one might lose verbal skills or autobiographical memory, but still be able to tie one's shoes or ride a bicycle).

This evolutionary kluge in brain/mind development is likely reflected in declarative transitions during our species' development. Declarative transitions probably occurred at different times, in longer or shorter periods of time, and in different domains (cognitive, spatial, emotional, sexual, and so on). How and when this happened was influenced by climate, food supply, language skills, availability of sexual partners, size of clan or tribal groups, and dozens of other factors impossible to quantify.

In fact, this is a perfect example of a complex "dynamical system embedded into an environment, with which it mutually interacts" (Gros, 2008, Complex and Adaptive Dynamical Systems, A Primer). Further:
Simple biological neural networks, e.g. the ones in most worms, just perform stimulus-response tasks. Highly developed mammal brains, on the other side, are not directly driven by external stimuli. Sensory information influences the ongoing, self-sustained neuronal dynamics, but the outcome cannot be predicted from the outside viewpoint. 

Indeed, the human brain is by-the-large occupied with itself and continuously active even in the sustained absence of sensory stimuli. A central theme of cognitive system theory is therefore to formulate, test and implement the principles which govern the autonomous dynamics of a cognitive system. (p. 219)
One issue in this realm is the consistent push toward equilibrium. The brain is always trying to balance the incoming sensory data from the environment with the internal cues from the autonomic (sympathetic and parasympathetic) and somatic nervous systems. Processing all of this information and regulating the body requires a lot of energy - the brain (as less than 2% of our total mass) uses about 20% of the calories we burn each day.

But it also means that we can never think about the human brain/mind without also considering its environment - physical, interpersonal, and temporal. We are physically, environmentally, emotionally, and culturally embedded beings. Trying to conceive of human consciousness without taking all of this into account is a recipe for reductionism.

Most of the complex work performed by the brain occurs below the level of conscious awareness, in what Daniel Kahneman calls (after Keith Stanovich and Richard West) System 1 (fast), the part of our brain responsible for a long list of chores, perhaps as much as 90% of its activity. Here are a few of them, from least to most complex:
  • Detect that one object is more distant than another. 
  • Orient to the source of a sudden sound. 
  • Complete the phrase “bread and…” 
  • Make a “disgust face” when shown a horrible picture.
  • Detect hostility in a voice. 
  • Answer to 2 + 2 = ? 
  • Read words on large billboards. 
  • Drive a car on an empty road. 
  • Find a strong move in chess (if you are a chess master). 
  • Understand simple sentences. 
  • Recognize that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype. (Kahneman, 2011, Thinking, Fast and Slow, Kindle locations 340-347).   
System 2 (slow) is responsible for "paying attention," for giving attention to mental activities that require our awareness and focus - consequently, these activities are disrupted if our attention drifts or we are interrupted. For example:


  • Brace for the starter gun in a race. 
  • Focus attention on the clowns in the circus. 
  • Focus on the voice of a particular person in a crowded and noisy room. 
  • Look for a woman with white hair. 
  • Search memory to identify a surprising sound. 
  • Maintain a faster walking speed than is natural for you. 
  • Monitor the appropriateness of your behavior in a social situation. 
  • Count the occurrences of the letter a in a page of text. 
  • Tell someone your phone number. 
  • Park in a narrow space (for most people except garage attendants). 
  • Compare two washing machines for overall value. 
  • Fill out a tax form. 
  • Check the validity of a complex logical argument. (Kahneman, Kindle locations 363-372)
System 2, which is also equivalent in some ways to working memory (the very brief memory capable of holding about 7 discrete objects at a time), is generally what we think of when we think of reasoning or of consciousness. Importantly, for this discussion, "System 2 has some ability to change the way System 1 works, by programming the normally automatic functions of attention and memory" (Kahneman, Kindle Locations 374-375).

Whenever we are asked to do something - or choose to do something (like a crossword puzzle) - that we do not normally do, we will likely discover that maintaining a mindset for that task requires focus and at least a little bit of effort to stay on task. Nearly all (if not all) novel activities to which we devote attention will require effort at first.

In terms of our evolution, this was no doubt true of thinking. At some point in our species' evolution our brains made the declarative transition from unconscious processing and processes to being able to think about our actions or our needs (hunger, sex, warmth, and so on).

Stewart offers the example of thinking skills to demonstrate how the declarative transition may have occurred:
A clear example is provided by the evolution of thinking skills. As with all skills, when thinking first arose, the processes that shaped and structured sequences of thought would have been adapted procedurally. Although the content of thought was declarative, the skills that regulated the pattern of thought were procedural. Individuals would discover which particular structures and patterns of thought were useful in a particular context by what worked best in practice. They would not have declarative knowledge that would enable them to consciously model alternative thinking strategies and their effects. Thinking skills were learnt procedurally, and there were no theories of thought, or thought about thinking.

The development of declarative knowledge about thinking strategies enabled existing thinking skills to be improved and to be adapted to meet new requirements. In particular, it eventually enabled an explicit understanding of what constituted rational and scientific thought, and where and why it was superior. The declarative transition that enabled thinking about thinking was a major transition in human evolution that occurred on a wide scale only within recorded history (see Turchin (1977) and Heylighen (1991) who examine this shift within the framework of metasystem transition theory). This transition remains an important milestone in the development of individuals, and broadly equates to the achievement of what Piaget referred to as the formal operational stage (Flavell 1985). However, many adults still do not reach this level (Kuhn et al. 1977).
This was no doubt one of the momentous leaps in our evolution, probably only second to the rise of self-awareness, to think about thinking, to observe our actions and thoughts from a third-person perspective. That last transition still has not happened for a lot of people who have developed beyond concrete thinking. Along this line, each new increase in the number of perspectives we can take stretches our cognitive skills in new directions.

Stewart suggests that there are still declarative transitions awaiting us.
Clearly, humans as yet have limited declarative knowledge about the central processes that produce consciousness, and little capacity to model and adapt these processes with the assistance of declarative knowledge. (p. 8)
Stewart addresses two of these systems in his paper: (1) the hedonic system and (2) the processes that act as a switchboard to decide when the global workspace (consciousness) is occupied by "structured sequences of thought and images."

The hedonic system is almost a stand-in for Freud's id. The hedonic system tells us what we like and dislike, what we want or need, what we desire, and what things we are motivated to do.
Although the hedonic system is the central determinant of what individuals do from moment to moment in their lives, it is not adapted to any extent with declarative knowledge. Individuals generally do not choose deliberatively what it is that they like or dislike, what they desire, or what they are motivated to do. They largely take these as given. They cannot change at will the impact of their desires, motivations and emotions on their behaviour, even when they see that the influence is maladaptive.
Most of what is in the hedonic system has been determined by natural selection or by classical conditioning during our individual development. The point is, however, that it functions as procedural knowledge, or as System 1 thinking. While we would like to think that we are rational creatures and reason out decisions such as who to vote for or what car to buy, more likely these decisions are made by the hedonic system and then, if we need to explain why, we generate a rational explanation for the choice (De Martino, Kumaran, Seymour, and Dolan, 2006).

The other system Stewart addresses is the "switchboard" that determines when consciousness is occupied by structured thought or image sequences - thinking, planning, or worrying, for example.
These structured sequences take consciousness ‘off line’ by loading its limited capacity for the duration of the sequence. While a sequence unfolds, it largely precludes the recruitment by consciousness of resources relevant to other adaptive needs.

Declarative modelling appears to have little input into whether an individual engages in a structured sequence of thought in particular circumstances. It is not something that individuals generally deliberate about or have detailed theories about. We do not, for example, decide whether to engage in a sequence of thought on the basis of declarative knowledge about what is optimal for our adaptability in particular circumstances. Nor is it something that is usually under voluntary control. In fact, we cannot voluntarily stop thought for extended periods as a simple experiment demonstrates—if we attempt to remain aware of the second hand of a watch while we stop thought, we find that sequences of thought will soon arise and fully load consciousness, ending our awareness of the second hand. (p. 10-11)
This last example is familiar to anyone who meditates. Trying to keep our attention focused on the breath is like trying to make a wild monkey sit still (thus the term monkey-mind).

There are a lot of implications for this second issue that Stewart brings to the discussion, but here are three that feel relevant to me.

1) Free will - whether we have it or not - will largely depend on our ability to choose what is in our awareness and then to use that skill to monitor and override the hedonic system.

2) In PTSD, one of the greatest challenges for survivors is that memories or flashbacks invade the global workspace and then play themselves out, sometimes on auto-repeat, and there is little the person can do to make it stop (well, there are grounding techniques, but they are somatic-level interventions, which may be the best way to get us out of our minds anyway).

3) Anxiety disorders generally feature a repetitious cognitive script that has a rather unique ability to draw in every situation that has ever gone badly as support for the current anxiety. The script (or as I prefer to understand it, the part) serves the purpose of keeping the person safe, but it does so to the detriment of the person's current life.

Each of those three points is a reason why I think GWT has a lot to add to how we do therapy - not just to how we might evolve our own consciousness.

In the next installment in this series, we will look at each of these potential declarative transitions in a lot more detail.


Tuesday, May 29, 2012

Diane Rehm - Stuart Firestein: "Ignorance: How It Drives Science"

This segment aired a week or so ago on NPR's Diane Rehm Show - it's a look at how "beginner's mind" is essential to progress in science. Firestein calls it ignorance, but that is simply another form of not knowing. Firestein's new book, the topic of the show, is Ignorance: How It Drives Science.

Stuart Firestein: "Ignorance: How It Drives Science"

Tuesday, May 22, 2012 
 

“Knowledge is a big subject. Ignorance is bigger...and it is more interesting.” These are the words of neuroscientist Stuart Firestein, the chair of Columbia University’s biology department. Firestein claims that exploring the unknown is the true engine of science, and says ignorance helps scientists concentrate their research. He compares science to searching for a black cat in a dark room, even though the cat may or may not be in there. Firestein's laboratory investigates the mysteries of the sense of smell and its relation to other brain functions. A discussion of the scientific benefits of ignorance.

Guests

Stuart Firestein: Chairman of the Department of Biology at Columbia University, professor of neuroscience.

Related Items 

Read An Excerpt

Reprinted from IGNORANCE: How It Drives Science by Stuart Firestein with permission from Oxford University Press, Inc. Copyright © 2012 by Stuart Firestein.

Friday, April 13, 2012

Wellcome Trust - Metacognition - I know (or don't know) that I know

This is a great article from the Wellcome Trust on Steve Fleming's 2010 paper, Relating Introspective Accuracy to Individual Differences in Brain Structure, or how differences in brain structure impact how we think about thinking. The paper was originally published in Science (17 September 2010), Vol. 329, No. 5998, pp. 1541-1543. DOI: 10.1126/science.1191883

Dr. Fleming has made the article available as a PDF through his website, as well as a wealth of other articles. 

External links

Feature: Metacognition - I know (or don't know) that I know

27 February 2012. By Penny Bailey
Cortical surface of the brain

At New York University, Sir Henry Wellcome Postdoctoral Fellow Dr Steve Fleming is exploring the neural basis of metacognition: how we think about thinking, and how we assess the accuracy of our decisions, judgements and other aspects of our mental performance.
Metacognition is an important-sounding word for a very everyday process. We 'metacognise' whenever we reflect upon our thinking process and knowledge.

It's something we do on a moment-to-moment basis, according to Dr Steve Fleming at New York University. "We reflect on our thoughts, feelings, judgements and decisions, assessing their accuracy and validity all day long," he says.

This kind of introspection is crucial for making good decisions. Do I really want that bar of chocolate? Do I want to go out tonight? Will I enjoy myself? Am I aiming at the right target? Is my aim accurate? Will I hit it? How sure am I that I'm right? Is that really the correct answer?

If we don't ask ourselves these questions as a kind of faint, ongoing, almost intuitive commentary in the back of our minds, we're not going to progress very smoothly through life.

As it turns out, although we all do it, we're not all equally good at it. An example Steve likes to use is the gameshow 'Who Wants to be a Millionaire?' When asked the killer question, 'Is that your final answer?', contestants with good metacognitive skills will assess how confident they are in their knowledge.

If sure (I know that I know), they'll answer 'yes'. If unsure (I don't know for sure that I know), they'll phone a friend or ask the audience. Contestants who are less metacognitively gifted may have too much confidence in their knowledge and give the wrong answer - or have too little confidence and waste their lifelines.

Metacognition is also fundamental to our sense of self: to knowing who we are. Perhaps we only really know anyone when we understand how, as well as what, they think - and the same applies to knowing ourselves. How reliable are our thought processes? Are they an accurate reflection of reality? How accurate is our knowledge of a particular subject?

Last year, Steve won a prestigious Sir Henry Wellcome Postdoctoral Fellowship to explore the neural basis of metacognitive behaviour: what happens in the brain when we think about our thoughts and decisions or assess how well we know something?

Killer questions

One of the challenges for neuroscientists interested in metacognition has been the fact that - unlike in learning or decision making, where we can measure how much a person improves at a task or how accurate their decision is - there are no outward indicators of introspective thought, so it's hard to quantify.

As part of his PhD at University College London, Steve joined a research team led by Wellcome Trust Senior Fellow Professor Geraint Rees and helped devise an experiment that could provide an objective measure of both a person's performance on a task and how accurately they judged their own performance.

Thirty-two volunteers were asked to look at a series of two very similar black and grey pictures on a screen and say which one contained a brighter patch.

"We adjusted the brightness or contrast of the patches so that everyone was performing at a similar level," says Steve. "And we made it difficult to see which patch was brighter, so no one was entirely sure about whether their answer was correct; they were all in a similar zone of uncertainty."

They then asked the 'killer' metacognitive question: How sure are you of your answer, on a scale from one to six?

Comparing people's answers to their actual performance revealed that although all the volunteers performed equally well on the primary task of identifying the brighter patches, there was a lot of variation between individuals in terms of how accurately they assessed their own performance - or how well they knew their own minds.
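To make that comparison concrete, here is a minimal sketch in Python, with invented numbers rather than data from the study, of one simple way to quantify how well confidence tracks performance: correlate each volunteer's trial-by-trial confidence ratings with whether each answer was correct. Treat this as an illustration of the idea only; the published study used a more sophisticated signal-detection measure (the area under the type-2 ROC curve), not a plain correlation.

```python
# Simplified illustration only: per-subject "metacognitive accuracy" estimated
# as the correlation between trial-by-trial confidence (1-6) and correctness
# (0/1). The study itself used a signal-detection measure (type-2 ROC area);
# all numbers below are invented.

import numpy as np

def metacognitive_accuracy(confidence, correct):
    """Correlation between confidence ratings and correctness.

    Values near 1 mean confidence closely tracks actual performance;
    values near 0 mean confidence says little about whether the
    subject was right on any given trial.
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    if confidence.std() == 0 or correct.std() == 0:
        return 0.0  # no variation, so no measurable relationship
    return float(np.corrcoef(confidence, correct)[0, 1])

# Two hypothetical subjects with identical performance on the primary task
# (4 of 7 trials correct) but very different insight into that performance.
subject_a_confidence = [6, 5, 2, 6, 1, 5, 2]  # sure when right, unsure when wrong
subject_a_correct    = [1, 1, 0, 1, 0, 1, 0]

subject_b_confidence = [5, 3, 4, 3, 4, 4, 3]  # confidence unrelated to accuracy
subject_b_correct    = [1, 1, 0, 1, 0, 1, 0]

print(metacognitive_accuracy(subject_a_confidence, subject_a_correct))  # ~0.97
print(metacognitive_accuracy(subject_b_confidence, subject_b_correct))  # ~0.06
```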

Magnetic resonance imaging (MRI) scans of the volunteers' brains further revealed that those who most accurately assessed their own performance had more grey matter (the tissue containing the cell bodies of our neurons) in a part of the brain located at the very front, called the anterior prefrontal cortex. In addition, a white-matter tract (a pathway enabling brain regions to communicate) connected to the prefrontal cortex showed greater integrity in individuals with better metacognitive accuracy.

The findings, published in 'Science' in September 2010, linked the complex high-level process of metacognition to a small part of the brain. The study was the first to show that physical brain differences between people are linked to their level of self-awareness or metacognition.

Intriguingly, the anterior prefrontal cortex is also one of the few parts of the brain with anatomical properties that are unique to humans and fundamentally different from our closest relatives, the great apes. It seems introspection might be unique to humans.

"At this stage, we don't know whether this area develops as we get better at reflecting on our thoughts, or whether people are better at introspection if their prefrontal cortex is more developed in the first place," says Steve.

I believe I do

Although this research, and research from other labs, points to candidate brain regions or networks for metacognition located in the prefrontal cortex, it doesn't explain why they are involved. Steve plans to use his fellowship to address that question by investigating the neural mechanisms that generate metacognitive reports.

He's approaching the question by attempting to separate out the different kinds of information (or variables) people use to monitor their mental and physical performance.

He cites playing a tennis shot as an example. "If I ask you whether you just played a good tennis shot, you can introspect both about whether you aimed correctly and about how well you carried out your shot. These two variables might go together to make up your overall confidence in the shot."

To evaluate how confident we are in each variable (aim and shot) we need to weigh up different sets of perceptual information. To assess our aim, we would consider the speed and direction of the ball and the position of our opponent across the net. To judge how well we carried out the actual shot, we would think about the position of our feet and hips, how we pivoted, and how we swung and followed through.

There may well have been some discrepancy between the shot we wanted to achieve and the shot we actually made. This is a crucial distinction for scientists exploring decision making. "Psychologists tend to think of beliefs, 'what I should do', as being separate from actions," explains Steve.

"When you're choosing between two chocolate bars, you might decide on a Mars bar - that's what you believe you should have, what you want and value. But when you actually carry out the action of reaching for a bar, you might end up reaching for a Twix instead. There's sometimes a difference there between what you should do and what you actually end up doing, and that's perhaps a crucial distinction for metacognition. My initial experiments are going to try to tease apart these variables."

Research into decision making has identified specific brain regions where beliefs about one choice option (one chocolate bar, or one tennis shot) being preferable to another are encoded. However, says Steve, "what we don't know is how this type of information [about values and beliefs] relates to metacognition about your decision making. How does the brain give humans the ability to reflect on its computations?"

He aims to connect the finely detailed picture of decision making given to us by neuroscience to the very vague picture we have of self-reflection or metacognition.

New York, New York

Steve is working with researchers at New York University who are leaders in the field of task design and building models of decision making, "trying to implement in a laboratory setting exactly the kind of question we might ask the tennis player".

They are designing a perceptual task, in which people will have to choose a target to hit based on whether a patch of dots is moving to the left or right. In other words, people need to decide which target they should hit (based on their belief about its direction of motion), and then they have to hit it accurately (action).

"We can use a variety of techniques to manipulate the difficulty of the task. If we make the target very small, people are obviously going to be more uncertain about whether they're going to be able to hit it. So we can separately manipulate the difficulty of deciding what you should do, and the difficulty of actually doing it."

Once the task is up and running, they will then ask the volunteers to make confidence judgements - or even bets - about various aspects of their performance: how likely they thought it was that they chose the right target, or hit it correctly. Comparing their answers with their actual performance will give an objective measure of the accuracy of their beliefs (metacognition) about their performance.
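To make that logic concrete, here is a toy simulation of a trial along those lines. It is only a sketch under assumed parameters - the motion-sensitivity and hit-probability functions, the noise on the bets and every value below are invented for illustration, not taken from the real NYU task.

```python
# Toy simulation of the trial structure described above: decision difficulty
# (motion coherence) and action difficulty (target size) are manipulated
# independently, and two separate confidence "bets" can each be scored
# against what actually happened. All parameters are illustrative.
import random

def run_trial(motion_coherence, target_size):
    true_direction = random.choice(["left", "right"])

    # Decision stage: more coherent motion -> more likely to pick the correct target.
    p_correct_choice = 0.5 + 0.5 * motion_coherence
    chose_correctly = random.random() < p_correct_choice

    # Action stage: a bigger target -> more likely to actually hit it.
    p_hit = min(0.95, 0.4 + target_size)
    hit_target = random.random() < p_hit

    # Confidence bets about each component, noisily tracking the true odds.
    belief_bet = p_correct_choice + random.gauss(0, 0.1)
    action_bet = p_hit + random.gauss(0, 0.1)

    return {
        "direction": true_direction,
        "chose_correctly": chose_correctly,
        "hit_target": hit_target,
        "belief_bet": belief_bet,
        "action_bet": action_bet,
    }

# An easy decision paired with a hard action: choices should be mostly right, hits less so.
trials = [run_trial(motion_coherence=0.8, target_size=0.1) for _ in range(500)]
print("proportion of correct choices:", sum(t["chose_correctly"] for t in trials) / 500)
print("proportion of targets hit:", sum(t["hit_target"] for t in trials) / 500)
print("mean belief bet:", round(sum(t["belief_bet"] for t in trials) / 500, 2))
print("mean action bet:", round(sum(t["action_bet"] for t in trials) / 500, 2))
```

Because decision difficulty and action difficulty enter the simulation independently, you can make the decision easy and the action hard (or the reverse) and then ask how well each bet tracks its corresponding outcome - exactly the decoupling described below.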

Drilling down

Such a task will mean Steve and his colleagues can start to decouple the perceptual information that gives people information about what they should do (which target to hit) from the perceptual information that enables them to assess the difficulty of actually carrying out the action (hitting the target).

And that in turn will make it possible to start uncoupling various aspects of metacognition - about beliefs, and about actions or responses - from one another. "I want to drill down into the basics, the variables that come together to make up metacognition, and ask the question: how fine-grained is introspection?"

He'll then use a variety of neuroscience techniques, including brain scanning and intervention techniques such as transcranial magnetic stimulation (to briefly switch off metacognitive activity in the brain), to understand how different brain regions encode information relevant for metacognition. "Armed with our new task, we can ask questions such as: is belief- and action-related information encoded separately in the brain? Is the prefrontal cortex integrating metacognitive information? How does this integration occur? Answers to these questions will allow us to start understanding how the system works."

Since metacognition is so fundamental to making successful decisions - and to knowing ourselves - it's clearly important to understand more about it. Steve's research may also have practical uses in the clinic. Metacognition is linked to the concept of 'insight', which in psychiatry refers to whether someone is aware of having a particular disorder. As many as 50 per cent of patients with schizophrenia have profoundly impaired insight and, unsurprisingly, this is a strong predictor that they will fail to take their medication.

"If we have a nice task to study metacognition in healthy individuals that can quantify the different components of awareness of beliefs, and awareness of responses and actions, we hope to translate that task into patient populations to understand the deficits of metacognition they might have." With that in mind, Steve plans to collaborate with researchers at the University of Oxford and the Institute of Psychiatry in London when he returns to finish his fellowship in the UK.

A science of metacognition also has implications for concepts of responsibility and self-control. Our society currently places great weight on self-awareness: think of a time when you excused your behaviour with 'I just wasn't thinking'. Understanding the boundaries of self-reflection, therefore, is central to how we ascribe blame and punishment, how we approach psychiatric disorders, and how we view human nature.

Image: An inflated cortical surface of the human brain reconstructed from MRI scans and viewed from the front. Areas of the prefrontal cortex where increased grey matter volume correlated with greater metacognitive ability are shown in hot colours. Credit: Dr Steve Fleming.

Reference

Fleming SM, Weil RS, Nagy Z, Dolan RJ, Rees G. Relating introspective accuracy to individual differences in brain structure. Science 2010;329(5998):1541-1543.

Tuesday, March 13, 2012

David Weinberger - Too Big to Know (2 Versions)


Internet philosopher David Weinberger has a new book out called Too Big to Know: Rethinking Knowledge Now That the Facts Aren't the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room. That's a hell of a title - so here are two different lectures that Weinberger gave on the book.

If you would rather read an interview, Rebecca Rosen interviewed him for The Atlantic in January.

Authors at Google: David Weinberger (Jan 18, 2012)

We used to know how to know. We got our answers from books or experts. We'd nail down the facts and move on. But in the Internet age, knowledge has moved onto networks. There's more knowledge than ever, of course, but it's different. Topics have no boundaries, and nobody agrees on anything.

Yet this is the greatest time in history to be a knowledge seeker . . . if you know how. In Too Big to Know, internet philosopher David Weinberger shows how business, science, education, and the government are learning to use networked knowledge to understand more than ever and to make smarter decisions than they could when they had to rely on mere books and experts.

This groundbreaking book shakes the foundations of our concept of knowledge—from the role of facts to the value of books and the authority of experts—providing a compelling vision of the future of knowledge in a connected world.

David Weinberger: Too Big to Know (Jan 25, 2012)

Noted author David Weinberger discusses topics from his new book, "Too Big to Know: Rethinking Knowledge Now That the Facts Aren't the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room." (Basic Books)

January 25, 2012
Frankfurt Kurnit Klein & Selz, NYC