Saturday, November 20, 2010

Helen Hobbs, M.D. - Genes versus Fast Foods: Eat, Drink and Be Wary

Important lecture from the NIH - please share widely.

Genes versus Fast Foods: Eat, Drink and Be Wary

Insufficient time has elapsed for our genomes to adapt to the caloric abundance and reduced physical activity accompanying industrialization. Diseases of dietary excess, rather than nutritional deficiency, are the major causes of death and disability in the Western world. Using human genetics, we have identified new genes and sequence variations conferring susceptibility (and resistance) to metabolic disorders associated with diabetes and heart disease.

Lecture Objectives:
1. Review strategies used to identify genetic variants contributing to common diseases associated with dietary excess.

2. Demonstrate the role of genes involved in lipid metabolism in susceptibility (and resistance) to metabolic and cardiovascular disease.

3. Appreciate how human genetics can provide mechanistic insights into the relationship between phenotypes and diseases.

The NIH Director's Wednesday Afternoon Lecture Series includes weekly scientific talks by some of the top researchers in the biomedical sciences worldwide.

Author: Helen Hobbs, M.D., UT Southwestern
Runtime: 00:55:59

Download: Download Video

Marcelo Gleiser - Reality Is What Our Minds Make Of It

13.7: Cosmos And Culture

This comes from NPR's 13.7: Cosmos And Culture blog, one of my new favorite reads - they feature an interesting selection of bloggers, including Alva Noë and Stuart Kauffman.

I have issues with many of the models that require human consciousness for the existence of the universe - including B Alan Wallace's interpretation of Buddhism and cosmology, among others - and even more so with the nonsense known as "the secret," which is little more than magical thinking writ large.

Gleiser is not saying the same thing here, as this quote makes clear:
. . . since evolution tells us that the human mind is fairly recent, what was going on before humans were around? Clearly, even if there isn’t a mind to think about reality, reality goes on perfectly fine without it. This is not only true before we were around, but also at the majority of the cosmic volume where we are not around and other minds aren’t either.
I agree. Our minds create our own particular experience of reality, but NOT reality as a whole. The universe existed before us, and it will exist after us.
Photo: Upside down (Seth Rader, via Flickr)

Because observation gives our world structure, our brains define the reality we perceive.

Last week, I wrote about the notion that reality is composed of many layers, each with its own set of physical laws and principles. Reductionism would pose the opposite, that reality can be traced down to the ultimate, or most fundamental, components of matter.

Under this prism, everything that exists can be explained from the way these material bits interact to create the structures we see in the world. The laws that dictate their behavior are the only fundamental laws; everything else comes from them.

There were many thoughtful responses and comments to my post that deserve to be addressed in more detail. Mostly, they hinged on an old, and quite difficult, philosophical question: what is reality and how do we know? Let’s see what we can do in less than 1000 words.

We can start by contrasting Hume and Kant. Hume, the ultimate empiricist, would claim that all that we know comes from the outside, from sensorial experience. We collect information about the world through our senses (that is, our measurements) and, based on this information, we define what is real. So, a person disconnected from the world, say, someone who grew up without any contact with external stimuli and was fed intravenously, would not be able to think much: without input, we are clueless about what goes on.

Kant would counter that we have a priori “intuitions,” thought structures that give meaning to the sensorial input that Hume considered vital. Without these intuitions, Kant would say, the sensorial input would be meaningless. Two of these intuitions are the a priori notions of space and time; they weave the fabric of reality, connecting data that, without them, wouldn’t make any sense. So, Kant brings the human mind to center stage, crediting it with the construction of reality itself: what we call real depends on our a priori structures. A mind with different a priori structures would have a different sense of the real.

Now, Kant doesn’t dismiss the sensorial input. To him, even though knowledge begins with experience, it doesn’t follow that it arises out of experience. That is, we need the sensorial input to start with, but meaning doesn’t come from the input alone. It needs to be framed by a priori intuitions, ordered in time, arranged in space.

During the early twentieth century, two revolutions in our understanding of Nature forced us to rethink the neat Kantian order. Einstein’s relativity combined space and time into a single framework, making them dependent on the observer’s perspective. They ceased to be universal quantities, becoming observer-dependent quantities. Of course, Einstein’s theory would actually restore universality, in that it provided the means for different observers to compare their measurements. Still, the net result is that although space and time remain aprioristic (or we should say space-time became aprioristic), they are now imbued with something else, a relation between two or more observers and their relative state of motion.

The second revolution was, of course, the advent of quantum mechanics. For today’s discussion, its most relevant aspect is the relation between observer and observed.

In Kant’s time, the separation between the two was assumed to be absolute: the object existed independently of its being observed. Quantum mechanics revised this intuition: an object’s physical nature — for example, whether an electron is a particle or a wave — is defined by the act of observation. This implies that the choice made by the observer induces the physical nature of what is being observed. More dramatically, we can state that the observer defines reality. And since the observer has intent and his/her intent comes from his/her mind, it follows that mind defines reality. (This seems to imply something that would need much more unpacking: that free will determines reality!)

Mind still needs a priori intuitions to make sense of the real; but mind also helps determine the real. Impartial objectivity becomes a thing of the past. Mind and reality become woven into a single whole. Things get a bit confusing, no question about it.

These notions have some interesting and puzzling consequences, and I hope to touch upon some of them in future posts. Here is one: since evolution tells us that the human mind is fairly recent, what was going on before humans were around? Clearly, even if there isn’t a mind to think about reality, reality goes on perfectly fine without it. This is not only true before we were around, but also at the majority of the cosmic volume where we are not around and other minds aren’t either. On the other hand, if there is no one to think about what is real, reality is rather dull.

This may sound like a dangerously humancentric view of reality, I know. But it isn’t. I used the term “humancentrism” in my last book A Tear at the Edge of Creation to stress what I see as our newly-found cosmic importance. I don’t mean our minds have a better view of reality than others. They simply have the view that matters to us. If there are other minds defining their reality out there, all the better. Since their minds would have evolved very differently from ours, their reality will be very different from ours.

So, not only does what we call reality evolve along with science (what was real 500 or 100 years ago is very different from what is real today), but it also hinges on the evolutionary history of the mind in question: different planetary histories (and they are all different!), different minds; different minds, different reality. I wonder if Kant would agree to this. I think he would.

Michel Bauwens - Web 4.0 as the next stage in the internet’s evolution?

Wait, we are on Web 4.0 now? What happened to Web 3.0? This interesting summary of the developmental stages of the Web comes from Michel Bauwens' P2P Foundation site.

Web 4.0 as the next stage in the internet’s evolution?

Michel Bauwens
10th November 2010

Web 3.0 and Web 4.0 are economic development stages in which peer-to-peer networks transform industry and political structures. They herald a return of community knowledge to the people and facilitate mass participation. Who should build these networks? Should these networks be community property, or be owned by the private sector?

Marcus Cake writes that:

Web 4.0 achieves a critical mass of participation in online networks that deliver global transparency, governance, distribution, participation, collaboration in industry, political and social networks and other key community endeavours. Web 4.0 delivers community sovereignty to channels and information.

Here is a summary of his view of the web’s evolution:

“Opaque channels were necessary, but now need to be replaced

A Web 1.0 channel refers to the social, political or industrial structures used by our community to distribute information, participate, collaborate and make decisions amongst members of a social group, industry, company or political system. The channel became the primary means for people to distribute, participate and contribute in different parts of our community. There was no alternative, and distribution channels are all-pervasive. Channels are also expensive, and their private owners seek to maximise profits. This restricted the ability of people to use the channel and the type of products and information that flowed through it. The content of the channel is determined by profit margin and not by what is best for the community. This includes information and the goods and services we consume. Every channel is a potential source of profit.

Web 1.0 channels have sacrificed long term community objectives

Web 1.0 channels in many diverse areas of our community have failed to balance the community’s short- and long-term interests. The unbridled profit motive and lack of transparency of channels in Web 1.0 have delivered our community a succession of global governance failures. This includes national bankruptcy, oil shortages, food shortages, war, pro-cyclical investment cycles, inadequate retirement savings, underinvestment in infrastructure to support our society and an unsustainable way of life. Humanity has a 50% chance of surviving beyond 2100.

Web 2.0 participation gave us an insight into what is possible

Web 2.0 demonstrated the technology to assemble and manage large global crowds with a common interest in social interaction. Large numbers of people joined the internet, with broadband penetration reaching 90% in many major economies. Web 2.0 is characterised by information of little inherent value, with revenue derived from third parties wanting to show advertising to eyeballs. Organisations are exploring the potential of Web 2.0 enterprise innovation or industry model innovation with single-point solutions such as wikis and forums. However, enterprise or industry innovation will be derived from the full application of Web 3.0 online network concepts, rather than Web 2.0 point solutions.

Web 3.0 transforms industry and politics with peer to peer structures

The internet provides a costless distribution channel that can connect more than one billion people peer to peer. Services, which are information based, now account for some 70% of major economies. Individuals can now create online networks in 90 days for US$25k. These online networks will reshape industry and political systems. Web 3.0 online networks allow people to see through the market or community and collectively match, learn and consume information in hours, not months. The key elements of Web 3.0 online networks are outlined here.

Web 4.0 transforms the world with a critical mass of social, industry and political networks

Web 4.0 achieves a critical mass of participation in online networks that deliver global transparency, governance, distribution, participation, collaboration in industry, political and social networks and other key community endeavours. Web 4.0 delivers community sovereignty to channels and information. Global Web 1.0 channels were created over 100 years of mergers, acquisitions and organic growth. Global Web 3.0 online social, industry and political networks can be created within 12 months. The potential for achieving rapid economic development and industry innovation outcomes in very short time frames is real.”

Friday, November 19, 2010

Frontiers in . . . Cognitive Science, Consciousness, Cultural Psychology, Evolutionary Psychology

Frontiers in is a very cool (and often geeky) open-access publisher of psychology, neuroscience, cognition, and related research. This is top-level research by top-level people, with no pay-per-view wall between the research and the common reader, i.e., me.

Each week I get an update with new articles in the subject areas I am subscribed to - which is all brain/mind/psychology related of course.

I don't have time anymore to post on each of the cool ones, so here are several from this week that I thought were interesting - maybe you will, too. Clicking on the title link will download the PDF of the article. The full citation follows the abstract (except for the last one, which is the full article).

Some Insults are Easier to Detect: The Embodied Insult Detection Effect

  • Psychology Department, University of Northern British Columbia, Canada
  • Psychology Department, University of Calgary, Canada

In the present research we examined the effects of bodily experience on processing of insults in a series of semantic categorization tasks we call insult detection tasks (i.e., participants decided whether presented stimuli were insults or not). Two types of insults were used: more embodied insults (e.g., asswipe, ugly), and less embodied insults (e.g., cheapskate, twit), as well as non-insults. In Experiments 1 and 2 the non-insults did not form a single, coherent category (e.g., airbase, polka), whereas in Experiment 3 all the non-insults were compliments (e.g., eyeful, honest). Regardless of type of non-insult used, we observed facilitatory embodied insult effects such that more embodied insults were responded to faster and recalled more often than less embodied insults. In Experiment 4 we used a larger set of insults as stimuli, which allowed hierarchical multiple regression analyses. These analyses revealed that bodily experience ratings accounted for a significant amount of unique response latency, response error, and recall variability for responses to insults, even with several other predictor variables (e.g., frequency, offensiveness, imageability) included in the analyses: responses were faster and more accurate, and there was greater recall for relatively more embodied insults. These results demonstrate that conceptual knowledge of insults is grounded in knowledge gained through bodily experience.

Keywords: conceptual processing, embodied cognition, insult processing, mental simulation

Citation: Wellsby M, Siakaluk PD, Pexman PM and Owen WJ (2010). Some Insults are Easier to Detect: The Embodied Insult Detection Effect. Front. Psychology doi: 10.3389/fpsyg.2010.00198

* * * * *

A connectionist approach to embodied conceptual metaphor

  • Department of Psychology, Stanford University, USA

A growing body of data has been gathered in support of the view that the mind is embodied and that cognition is grounded in sensory-motor processes. Some researchers have gone so far as to claim that this paradigm poses a serious challenge to central tenets of cognitive science, including the widely held view that the mind can be analyzed in terms of abstract computational principles. On the other hand, computational approaches to the study of mind have led to the development of specific models that help researchers understand complex cognitive processes at a level of detail that theories of embodied cognition (EC) have sometimes lacked. Here we make the case that connectionist architectures in particular can illuminate many surprising results from the EC literature. These models can learn the statistical structure in their environments, providing an ideal framework for understanding how simple sensory-motor mechanisms could give rise to higher-level cognitive behavior over the course of learning. Crucially, they form overlapping, distributed representations, which have exactly the properties required by many embodied accounts of cognition. We illustrate this idea by extending an existing connectionist model of semantic cognition in order to simulate findings from the embodied conceptual metaphor literature. Specifically, we explore how the abstract domain of time may be structured by concrete experience with space (including experience with culturally-specific spatial and linguistic cues). We suggest that both EC researchers and connectionist modelers can benefit from an integrated approach to understanding these models and the empirical findings they seek to explain.

Keywords: conceptual metaphor, connectionism, embodiment, models, space, time

Citation: Flusberg SJ, Thibodeau PH, Sternberg DA and Glick JJ (2010). A connectionist approach to embodied conceptual metaphor. Front. Psychology doi: 10.3389/fpsyg.2010.00197

* * * * *

Consciousness and Attention: On sufficiency and necessity

  • Division of Biology, California Institute of Technology, USA
  • Division of Humanities and Social Sciences, California Institute of Technology, USA
  • Brain Science Institute, Tamagawa University, Japan
  • Division of Engineering and Applied Science, California Institute of Technology, USA
  • Brain and Cognitive Engineering, Korea University, Korea (South)

Recent research has slowly corroded a belief that selective attention and consciousness are so tightly entangled that they cannot be individually examined. In this review, we summarize psychophysical and neurophysiological evidence for a dissociation between top-down attention and consciousness. The evidence includes recent findings that show subjects can attend to perceptually invisible objects. More contentious is the finding that subjects can become conscious of an isolated object, or the gist of the scene in the near absence of top-down attention; we critically re-examine the possibility of ‘complete’ absence of top-down attention. We also cover the recent flurry of studies that utilized independent manipulation of attention and consciousness. These studies have shown paradoxical effects of attention, including examples where top-down attention and consciousness have opposing effects, leading us to strengthen and revise our previous views. Neuroimaging studies with EEG, MEG and fMRI are uncovering the distinct neuronal correlates of selective attention and consciousness in dissociative paradigms. These findings point to a functional dissociation: attention as analyzer and consciousness as synthesizer. Separating the effects of selective visual attention from those of visual consciousness is of paramount importance to untangle the neural substrates of consciousness from those for attention.

Keywords: attention, consciousness, neuroimaging, psychophysics

Citation: Van Boxtel JJ, Tsuchiya N and Koch C (2010). Consciousness and Attention: On sufficiency and necessity. Front. Psychology doi: 10.3389/fpsyg.2010.00217

* * * * *

What Do Social Groups Have To Do With Culture? The Crucial Role of Shared Experience

  • Psychological Studies in Education, Temple University, USA

In an eloquent article in a recent volume of American Psychologist, Cohen (2009) evoked a contentious question: What are the boundaries of culture? To Cohen, the extant psychological literature has been too limited in its almost exclusive emphasis on independent-interdependent self-construal as the prime psychological process characterizing cultural variation, with the variation being limited itself to nationalities and an East-West division. Cohen argued that cultural processes are more complex and diverse, and cultural boundaries are more fine-grained. Cohen’s apt critique noted that many forms of culture are overlooked when psychologists are so limited in their scope. To expand the conceptual space, Cohen urged psychologists to consider other cultural identities such as religion, socioeconomic status, and regional locale, as well as their possible intersections. Signifying “cultural” identities that have nominal labels as potential markers for culture may be interpreted to suggest that group membership is synonymous with cultural processes. Cohen’s view is more sophisticated than that. However, the emphasis on nominal groupings such as religion and SES—to which we could add race, ethnicity, sexual orientation, disability status, etc.—does raise the question: What do these social groups have to do with culture? We argue that a focus on shared meanings of experiences, rather than nominal social groupings, is a more appropriate and productive path toward achieving Cohen’s goal of expanding and refining our understanding of cultural psychological processes. There are several issues that we believe are pertinent to the relations between social groups and culture. One is the important recognition that all nominal groupings are themselves cultural constructions: social schemas that emerged through social interaction in particular contexts to fulfill conceptual and practical functions in ritualized social life. 
The meanings of these nominal groupings have very fuzzy boundaries that render group inclusion criteria messy; they change continuously; and they reflect the purposes of those employing the categories more than the characteristics of the group members. Clearly this is the case with the more obviously malleable categories: low SES means quite different things and the category would include people with different economic characteristics depending on the country, the historical period, the political result of deliberations among economists, the purpose of the researchers, and the access to different kinds of data. But, even social categories that in layperson terms have essential properties, such as gender, have previously taken on different meanings and continue to have fuzzy and dynamic boundaries, as is apparent from those whose lifestyles challenge the reification of these labels (e.g., GLBT). As cultural phenomena, nominal groupings should be themselves a topic for study in cultural psychology as they are in other scholarly fields (Brubaker, 2009). Even more pertinent to the current opinion is the understanding, which not incidentally is shared by Cohen, that each grouping includes people—self-identified or otherwise—who differ in many significant cultural-psychological characteristics and dimensions. By focusing on the nominal group, psychologists are running the risk of overlooking more significant processes; and, clearly, of stereotyping. This is not to say that group membership has no significance to cultural-psychological processes. Whereas nominal groupings do not have ontological existence, they are an important element in the social-political reality. The cultural-political construction of certain groupings creates experiences that are shared by group members in ways that may, indeed, result in cultural processes. The category of “immigrant” could serve as a case in point. 
While far from being equalized across all immigrant groups, US immigration policy does treat similarly people who may otherwise share very little with each other (e.g., language, beliefs, values, lifestyles), except for their immigration experience. Similar treatment may result in some shared meaning about the immigration experience. Such shared experiences may manifest, perhaps, in a relieved understanding smile exchanged by two very different people after finishing the lengthy admission process at JFK’s INS offices; to borrow from Geertz (1973)—“a speck of behavior, a fleck of culture, and – voilà! – a gesture” (p. 6). Of course, as Geertz noted, “that…is just the beginning” (p. 6). The prevalent effects of social-political grouping—be they the consequence of formal policy or informal perceptions and norms—may result in collective experiences (e.g., discrimination, differential opportunities, expected behavior), which, in turn, may lead to shared meanings and hence to cultural-psychological processes: cognitive, emotional, motivational, and behavioral manifestations of those shared meanings. These processes clearly merit investigation and intervention. Yet, it would be a grave mistake to assume a priori that each immigrant to the US—or, each attendant to a Christian church, each citizen earning under $30,000, each resident of a south-western state—shared the same experiences or made the same meaning of collective experiences. 
Perhaps the cultural-psychological processes most relevant for understanding these people’s actions in particular contexts are rooted in shared experiences that cut across social categories: attending the same public school; commuting during rush hour; relocating after a flood… There is a seeming tension between understanding that, on the one hand, nominal groupings are dynamic cultural constructions; group members are psychologically and culturally diverse; social group labels, or “cultural” identities, are, in fact, not synonymous with culture; and recognizing on the other hand that despite their non-essentialist nature, group memberships may involve common experiences that result in some cultural processes. What may psychologists interested in cultural-psychological processes do? One way to address this challenge is by careful reflection on the formulation of research questions. Arguably, cultural-psychological processes emerge from and manifest in shared experiences in lived contexts (Cole, Engeström, & Vasquez, 1997). Researchers might begin with those lived contexts that play important roles in people’s lives and seek the shared meanings of actions in these contexts. Indeed, many cultural psychology researchers already practice such a perspective (e.g., Lawrence, Agnes, & Valsiner, 2004; Sherry, Wood, Jackson, & Kaslow, 2006). In turn, researchers who are interested in the role of the social-political reality of social groups in culture ought to pose research questions with acute sensitivity to social-political-historical processes and should proceed with the awareness that group memberships are cultural constructions and, consequently, political realities rather than reified entities. Perhaps these conclusions are similar to those Cohen aimed at. We are in full agreement with his challenge to the currently dominant paradigm that focuses on a small number of dimensions generalized across broad nominal categories. 
However, following many others (e.g., Betancourt & Lopez, 1993; Bruner, 1990; Shweder & Sullivan, 1993), we caution against the emphasis on nominal group labels as the obvious and unproblematic entry point for conceptualizing and investigating cultural-psychological processes.

Keywords: collective experience, cultural psychology, culture, nominal social groups

Citation: Bergey BW and Kaplan A (2010). What Do Social Groups Have To Do With Culture? The Crucial Role of Shared Experience. Front. Psychology doi: 10.3389/fpsyg.2010.00199

* * * * *

Did insecure attachment styles evolve for the benefit of the group?

  • Department of Anthropology, University of California at Los Angeles, Los Angeles, CA, USA

A commentary on:

The attachment paradox: how can so many of us (the insecure ones) have no adaptive advantages?
by Ein-Dor, T., Mikulincer, M., Doron, G., and Shaver, P. R. (2010). Perspect. Psychol. Sci. 5, 123–141.

In a recent article in Perspectives on Psychological Science, Ein-Dor et al. (2010) propose that insecure attachment styles harm the biological fitness of individuals, yet may have been favored by natural selection because they provide benefits for the group. This novel hypothesis proclaims that groups containing a mixture of secure and insecure attachment styles deal more effectively with hazards, such as venomous snakes or fires, because of earlier detection and escape. While I support adaptationist approaches to development, including attachment, I have concerns about this specific proposal. In particular, I question: (1) that insecure attachment styles are detrimental to individual fitness, (2) that insecure attachment styles are well-designed for dealing with danger at the group level, (3) the underlying assumption that human attachment styles evolved in social groups comprised mostly of genetic relatives, and (4) whether the empirical evidence provided by the authors can arbitrate between hypotheses postulating benefits to individuals versus benefits to groups.

Ein-Dor et al. (2010) set out to explain an “evolutionary paradox”: insecure attachment styles appear harmful to individual fitness, yet they are prevalent in human societies. Studies show that 33–50% of all humans may be insecurely attached (i.e., anxious, avoidant), across age groups, with higher percentages occurring in populations living in conditions of poverty and instability (Cassidy and Shaver, 1999, 2008; Mikulincer and Shaver, 2007). The solution to the “paradoxical” persistence of insecure attachment styles, according to Ein-Dor et al., is that across evolutionary time, the costs of insecure attachment styles to individuals were exceeded by benefits at the group-level. These group-level benefits are considered a driving selective force, not accidental byproducts of strategies that are individually advantageous – hence the paradox. The central idea is that insecure attachment styles are suboptimal to the individual, yet prevalent: this “evolutionary paradox” is resolved by positing group-level benefits. On this view, insecurely attached individuals are evolutionary altruists: they incur a fitness cost to enhance the fitness of other individuals in the group.

Here, I first question the existence of the “attachment paradox” and then the solution proposed by Ein-Dor et al. (2010). For attachment research to benefit from evolutionary biology, it is important that ideas about evolutionary processes, as well as assumptions about our human evolutionary history, are correct and, whenever possible, complete. Thus, an integration of evolutionary and developmental science requires, in addition to empirical studies, conceptual analyses and discussion of key premises.

The hypothesis of Ein-Dor et al. (2010) assumes that insecure attachment styles are maladaptive to the individual; however, the authors do not provide sources to support this claim. To my knowledge, the fitness effects of attachment strategies have never been measured in humans. It would be most informative if studies compared the number of viable offspring (or a different proxy for fitness) of individuals with insecure versus other attachment styles, in conditions in which insecure attachment styles tend to develop. Ideally, such studies would be conducted cross-culturally, in order to ensure results generalize across socio-ecological conditions, or to document and understand variation (Henrich et al., 2010). However, at present, the fitness costs and benefits of different attachment styles are unknown. Therefore, “the attachment paradox” itself is a hypothesis, not a fact requiring explanation. Moreover, some theories suggest that insecure attachment styles can be advantageous to individuals, given particular conditions (e.g., Belsky et al., 1991; Chisholm, 1996; Nettle, 2006; Del Giudice, 2009; Del Giudice and Belsky, 2010). Insecure attachment styles may be adaptive, for instance, if one grows up in a world where people generally provide little support (Belsky et al., 1991; Belsky et al., 2010). Ein-Dor et al. (2010) view their proposal as complementary to this perspective. However, the existing work assumes insecure attachment styles are advantageous to individuals, while Ein-Dor et al. (2010) depart from the exact opposite assumption – the “evolutionary paradox” – making integration difficult. Still, I will argue that even if we grant that insecure attachment styles may harm individual fitness, explaining their evolution in terms of adaptive, group-level benefits has several problems.

In biology, adaptations are identified when a trait accommodates a presumed function “with sufficient precision, economy, [and] efficiency” (Williams, 1966, p. 10). The proposal of Ein-Dor et al. (2010), in my view, does not meet these criteria. Ein-Dor et al. argue that two major insecure attachment styles – avoidant and anxious – evolved for their group-level benefits: “The avoidant pattern may be associated with quick, independent responses to threat, which may at times increase the survival chances of group members by solving the survival problem or demonstrating ways to escape it. The anxious pattern may be associated with sensitivity and quick detection of dangers and threats, which alert other group members to danger and the need for protection or escape” (p. 129). Both these functions address some features of insecure attachment styles, such as social withdrawal and high levels of stress. However, they do not address other features that may be fitness-relevant, such as low self-esteem, greater risk of depression, mixed feelings about relationships, indiscriminate self-disclosure, ineffective coping strategies, and over-dependence on others. While it may be possible to advance group-level benefits for these features as well, Ein-Dor et al. do not discuss them in depth. In biology, adaptationist accounts are considered most convincing when a close correspondence is revealed between the structure of an adaptive problem and the features of its solution; it is not sufficient to select some features, and hypothesize about their adaptive value, while leaving out other, equally significant ones.

Concerning ancestral social organization, Ein-Dor et al. (2010) assert that all members of a group, “in the environment of evolutionary adaptedness, would often have been genetically related” (p. 124). This premise, if correct, helps their proposal that insecure attachment styles are group-level adaptations, because the costs incurred by altruistic individuals may be compensated by gains in inclusive fitness (Hamilton, 1964): genetic relatives are more likely to share copies of the same alleles, and so helping kin implies furthering one’s own reproductive success. However, Ein-Dor et al. (2010) do not provide sources to support their claim that, historically, humans lived in groups composed primarily of kin. To my knowledge, no such sources exist. Unfortunately, the precise characteristics of ancestral social organizations remain largely unknown. If anything, current evidence seems to suggest that humans lived in diverse patterns of social organization, rather than a single one (Schrire, 1980; Foley, 1995; Irons, 1998; Marlowe, 2005; Richerson et al., 2009).
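The inclusive-fitness logic invoked here is usually stated compactly as Hamilton's rule: an altruistic act is favored by selection when rb > c. A minimal sketch, with purely hypothetical numbers (the fitness values are my illustration, not figures from the attachment literature):

```python
def hamilton_favors_altruism(r, b, c):
    """Hamilton's rule (Hamilton, 1964): altruism is favored when
    r * b > c, where r is the genetic relatedness between actor and
    recipient, b the fitness benefit to the recipient, and c the
    fitness cost to the actor."""
    return r * b > c

# Hypothetical numbers: helping a full sibling (r = 0.5)
print(hamilton_favors_altruism(0.5, 3.0, 1.0))    # True: rb = 1.5 > 1.0

# Same act directed at a first cousin (r = 0.125) is not favored
print(hamilton_favors_altruism(0.125, 3.0, 1.0))  # False: rb = 0.375 < 1.0
```

This is why the "kin-groups" premise matters: as average relatedness in the group falls, the same altruistic act becomes harder to explain by inclusive fitness alone.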

Still, even if we grant that humans would have lived in kin-based households, such households may have co-existed in larger groups consisting of genetically non-related individuals. To what extent the argument of Ein-Dor et al. (2010) formally depends on the “kin-groups assumption” is hard to know. Analyzing it requires more detailed specification of the costs and benefits of altruistic strategies in various group compositions (including the ratio of relatives to non-relatives, and their degrees of relatedness). Biologists have made progress exploring the conditions favoring the evolution of altruism (e.g., Nowak, 2006); yet much work remains to be done, especially in the domain of large-scale cooperation. It is a merit of Ein-Dor et al. (2010) to invite more discussion about human ancestral social organizations. A better understanding of these contexts may provide insights into the evolved structure of developmental mechanisms, including their dynamic expression across the full breadth of conditions our species experiences (Panchanathan et al., 2010).

Finally, Ein-Dor et al. (2010) present two kinds of evidence – cognitive and behavioral – to support their hypothesis that insecure attachment styles evolved for benefits at the group level. The cognitive evidence shows that anxiously attached individuals detect potential threats relatively fast and alert others about imminent danger. It also indicates that avoidant individuals initiate self-preservation efforts relatively fast, without relying on the help of other people. Both these empirical results are interesting; however, they do not substantiate that these cognitive aspects are adaptations “for” the benefit of social groups. This is because in each example, individuals may themselves derive a net benefit from their strategy: anxious individuals because early detection of danger facilitates escape, while alerting others can induce collective efforts to ameliorate the threat; avoidant individuals because a focus on autonomous self-preservation may be adaptive, in a world where other people provide little social support (Belsky et al., 1991). Thus, the cognitive evidence cannot distinguish between individual-level and group-level benefits. The behavioral evidence provided by Ein-Dor et al. (2010) employs a smoke-in-the-room experimental setting, and shows that: “more heterogeneous groups in terms of attachment orientations were … more effective in dealing with the dangerous situation and took less time to detect and deal with the danger” (p. 135). However, it is not clear why a group-selection perspective would predict that heterogeneous groups perform better than homogeneous groups, composed exclusively of insecurely attached individuals. Is it not the case that a larger number of vigilant eyes is better in dangerous situations?
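The closing question – whether more vigilant eyes are simply better – can be made concrete with a toy probability model (my illustration, not the authors'): if each vigilant group member independently detects a threat with probability p, the group detects it with probability 1 − (1 − p)^k, which rises monotonically with the number of vigilant members k.

```python
def group_detection_prob(p, k):
    """Probability that at least one of k independent watchers,
    each detecting a threat with probability p, spots it."""
    return 1 - (1 - p) ** k

# Hypothetical per-member detection probability
p = 0.3
print(round(group_detection_prob(p, 2), 2))  # 0.51
print(round(group_detection_prob(p, 4), 2))  # 0.76: more vigilant members, higher detection
```

Under these (admittedly simplistic) independence assumptions, a homogeneous group of anxiously attached detectors should outperform a mixed group at detection, which is why the superiority of heterogeneous groups needs a separate explanation.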

In sum, while I support adaptationist approaches to development, including attachment, I believe there are significant problems with the hypothesis advanced by Ein-Dor et al. (2010): that insecure attachment styles may be group-selected adaptations for dealing with danger. Despite these doubts, I value the novelty of the hypothesis, and I look forward to future theoretical analyses and empirical tests of the current ideas.


I thank Robert Bettinger, Ron Dotsch, Richard McElreath, Hiske Hees, Joe Henrich, Sarah Mathew, Karthik Panchanathan, Pete Richerson, and Christopher Stephan for valuable comments.


Belsky, J., Houts, R. M., and Fearon, R. M. P. (2010). Infant attachment security and the timing of puberty: testing an evolutionary hypothesis. Psychol. Sci. 21, 1195–1201.


Belsky, J., Steinberg, L., and Draper, P. (1991). Childhood experience, interpersonal development, and reproductive strategy: an evolutionary theory of socialization. Child Dev. 62, 647–670.

BBC - Can brain scans tell us who makes a good chief executive?

Hmmmm . . . . Not sure this is going to reveal much of anything useful - there is much to being a good leader that is intangible, based on experience, based on interpersonal skills, and so on. We're not likely to see that in the brain, in my opinion.

Can brain scans tell us who makes a good chief executive?

Brain scans could reveal leadership ability

Sir John Madejski is about to find out what is going on inside his head.

After final preparations by a team of scientists the leading British businessman lies down on a stretcher and is wheeled gently into an MRI scanner.

But Sir John is not ill. The 45-minute brain scan is part of a unique experiment to try to work out whether science can be applied to the study of leadership.

Neuroscientists, psychologists and management experts at Reading University are collaborating on a study which aims to examine the brains of chief executives and leaders in other fields such as the military or voluntary organisations.

Dr Kevin Money of Henley Business School, now part of Reading University, explains the aims: "We hope to look at how leaders from different sectors make decisions, what actually leads people to move from making good to bad decisions, what goes on in people's minds and how they make those choices."

Inside the scanner, Sir John is not just having a rest, he is completing a series of exercises.

Professor Douglas Saddy of Reading's Centre for Integrative Neuroscience and Neurodynamics looks on as the businessman presses buttons on a keypad to make various financial decisions: "In this case," he explains, "what he is being asked to do is make a judgement about whether, given a certain set of information, a short-term reward would be better than a long-term reward."

While he presses the keypad his brain activity is being measured. The results of this and a number of other scans will be aggregated to try to draw out some lessons.

Sir John emerges from the £1m scanner looking cheerful enough.

Spot the business leader

"I think they found my brain," he jokes. The entrepreneur has made enough money from a string of businesses to buy a football club and endow a Centre for Reputation at Henley Business School.

He is enthusiastic about the project and has promised to encourage fellow tycoons to submit their brains for scanning.

Dr Money is cautious about promising instant results from this research: "It's way too early, we can't look at one person's brain and conclude too much. What we can do is look at different groups, say military and business leaders, and compare leadership education within those different groups."

Peter Saville believes in the power of psychometric testing.

But using technology to examine what makes a good leader is nothing new. For many decades organisations around the world have used psychometric testing to help choose candidates for senior positions, and to try to understand what constitutes a good leader.

But psychometrics is a controversial science, with some critics suggesting it makes claims that cannot be substantiated.

Professor Peter Saville has run businesses supplying psychometric techniques for more than 30 years.

He outlines for me a history of his science which he says stretches back to techniques used by Samuel Pepys to select naval officers, and insists that it makes a valuable contribution to the process of choosing job candidates: "You still find interviewers who judge people on the first minute of an interview," he says. "All we are doing is reducing the odds of choosing the wrong person. It's science versus sentiment."

Non-strategic me

Then Professor Saville sets me a psychometric test of my leadership skills. It involves some 36 quite complex questions, where I am asked to rank my own skills - from decisiveness to strategic thinking.

Often, I am asked to decide between aspects of my personality that are not mutually exclusive - whether I seek to consult other members of the team, whether I am keen to promote my own work.

After I complete the questionnaire, Professor Saville hands me a report on my leadership skills.

It is not encouraging. "You come in the bottom 2% of the population for strategic vision," he tells me. He tactfully tries to reassure me that I have scored very highly as a networker and a communicator - important skills for a journalist - but makes it clear that I am not going to be asked to lead some major organisation any time soon.

Headhunter Virginia Eastman does not believe brain scanners will force her to look for a new job.

So is there a chance that a recruitment industry which already uses psychometrics will now look to other techniques, including perhaps brain scanning? One headhunter is sceptical.

Virginia Eastman of Heidrick and Struggles hunts down candidates for senior roles in global media organisations. She says that new technology is helping to make the process of communicating with and assessing suitable leaders more rapid, but it only goes so far: "Our whole profession is built on one thing, the consensus that we all know what good looks like, and that we make that judgement. No machine can replace that."

Neuroscientists and psychologists believe they can make a real contribution to our understanding of what makes leaders tick.

But for now, those whose job it is to select leaders still believe it is more of an art than a technology.

Bonnitta Roy - Evo-Devo and the Post-Postmodern Synthesis: What Does Integral Have to Offer?

A very interesting and challenging post from the always brilliant Bonnitta Roy has been posted over at Beams and Struts, one of a very cool new breed of integral blogs pushing the model in new post-Wilberian directions.

Evo-Devo and the Post-Postmodern Synthesis: What Does Integral Have to Offer?

Written by Bonnitta Roy



I am currently working on an article about epistemic challenges to evolutionary theory, and it seemed timely to receive an invitation from Chris Dierkes to contribute to the ongoing discussion here at beamsandstruts on evolution. More specifically for this audience, I am addressing the question of what integral has to offer to evolutionary theory as it moves into its post-postmodern phase. The various new approaches to evolutionary thinking I am researching are post-postmodern in the sense that the theorists are themselves aware that a theory of evolution is both created within and constrained by the epistemic, conceptual framework any particular theory is working from. These new approaches to evolutionary theory are part of a larger new inquiry into science studies in the wake of the postmodern assessment of scientific reason. There are, for example, a number of philosophers of science who are trying to define a “naturalistic turn” that would serve as a post-postmodern re-construction of science. This, too, requires inquiry into various conceptual assumptions and frameworks that have become embedded in the scientific world-view, as well as some delicious thinking about entirely new conceptual tools with which to approach science. Evolutionary theory is reaping exciting benefits from this “naturalistic turn” in particular, through an emerging field of theory and research that is attempting a grand synthesis of evolution and development, called Evo-Devo.

It is easy to recognize Evo-Devo’s naturalistic turn in Lewontin’s words quoted in Integrating Evolution and Development.[1]

All sciences, especially biology, have depended on dominant metaphors to inform their theoretical structures and to suggest directions in which the science can expand and connect with other domains of inquiry. Science cannot be conducted without metaphors. Yet, at the same time, these metaphors hold science in an eternal grip and prevent us from taking directions and solving problems that lie outside their scope. p. 37

The epistemic challenge for a naturalized science of Evo-Devo is, as Callebaut notes in the same book:

Theoretical perspectives coordinate models and phenomena; such coordination is necessary because phenomena are complex, or scientific interests in them are heterogeneous, and the number of possible ways of representing them in models is too large. Adequate theorizing may require a variety of perspectives and models—a point worth keeping in mind in discussing what the “right” account of evo-devo is. p. 38

One primary candidate for an adequate account is Susan Oyama’s developmental systems theory. Oyama is both a psychologist and philosopher of science, and her work, The Ontogeny of Information: Developmental Systems and Evolution, is regarded as the foundational text in the field. Evan Thompson’s enactive approach attempts to carry DST (developmental systems theory) forward by interweaving through it a theory of the phenomenology of autopoietic systems.

Not surprisingly, given its postmodern sensibilities, the naturalistic turn in science has also embarked on a re-conceptualization of socio-cultural evolution. There is an interesting twist here in which the notion of socio-cultural evolution is being extended “back” into biological evolutionary theory by asking new questions about the “fundamental unit of evolution.” The answer, it seems, may turn out to look more like socio-cultural adaptation and its relatedness to the environment, than any current theory based on a combination of genetic and epigenetic forces and natural selection processes in the environment.

Again, in Lewontin’s words,

Any theory of the evolution of human life which begins with what are said to be individual biological constraints on individuals, and tries to create a picture of society as the sum of those constraints, misses what is really essential about the social environment, which is that in moving from the individual to the social level we actually change the properties of objects at the lower level. This whole problem of levels of explanation, of levels of evolution, of levels of action, is one of the deepest ones with which we have to deal in our understanding not only of sociobiology, but of evolution in general.[2]

I hope this short introduction to my research gives you a taste of how exciting these times are for evolutionary and developmental theory as well as for philosophers who are looking at the activity from a meta-theoretical level.

Go read the whole article.

Geshe Gedun Lodro - Our motivation is the welfare of all sentient beings

Achieving Spiritual Transformation Through Meditation

by Geshe Gedun Lodro, translated and edited by Jeffrey Hopkins

Dharma Quote of the Week

There is both a reason and a purpose for cultivating the meditative stabilization observing exhalation and inhalation of the breath. The reason is mainly to purify impure motivations. What exactly is to be purified? The main of these are the three poisons--desire, hatred, and obscuration. Even though we have these at all times and even though the meditator will still retain them, she or he is seeking to suppress their manifest functioning at that time. The specific purpose for cleansing impure motivations before meditation is to dispel bad motivations connected with this lifetime, such as having hatred toward enemies, attachment to friends, and so forth.

In terms of the practice I am explaining here, even the thought of a religious practitioner of small capacity is included within impure motivations; such a person engages in practice mainly for the sake of a good future lifetime. Similarly, if on this occasion one has the motivation of a religious practitioner of middling capacity--that of only oneself escaping from cyclic existence, this is also impure.

What is a pure motivation? To take as one's aim the welfare of all sentient beings. This is the motivation of a religious practitioner of great capacity. Meditators should imagine or manifest their own impure motivation in the form of smoke, and with the exhalation of breath should expel all bad motivation. When inhaling, they should imagine that all the blessings and good qualities of Buddhas and Bodhisattvas, in the form of bright light, are inhaled into them. This practice is called purification by way of the descent of ambrosia. There are many forms of this purification, but the essence of the practice is as just indicated.

--from Calm Abiding and Special Insight: Achieving Spiritual Transformation Through Meditation by Geshe Gedun Lodro, translated and edited by Jeffrey Hopkins, published by Snow Lion Publications

Calm Abiding and Special Insight • Now at 50% off
(Good through November 26th).

Authors@Google: Kevin Kelly - "What Technology Wants"

Interesting - I've not read a lot of Kelly's stuff, other than a few blog posts here and there. Didn't know he was working in the singularity domain. His new book sounds interesting. His approach seems to involve taking on the perspective of technology experientially to understand its "mind."
What Technology Wants

Kevin Kelly will be speaking about his latest book, "What Technology Wants." This provocative book introduces a brand-new view of technology. It suggests that technology as a whole is not just a jumble of wires and metal but a living, evolving organism that has its own unconscious needs and tendencies. Kelly looks out through the eyes of this global technological system to discover "what it wants." Kelly uses vivid examples from the past to trace technology's long course, and then follows a dozen trajectories of technology into the near future to project where technology is headed.

This new theory of technology offers three practical lessons: By listening to what technology wants we can better prepare ourselves and our children for the inevitable technologies to come. By adopting the principles of pro-action and engagement, we can steer technologies into their best roles. And by aligning ourselves with the long-term imperatives of this near-living system, we can capture its full gifts.

Speaker Info: Kevin Kelly

Kevin Kelly is Senior Maverick at Wired magazine. He co-founded Wired in 1993, and served as its Executive Editor from its inception until 1999. He has just finished a book for Viking/Penguin called "What Technology Wants," published October 18, 2010. He is also editor and publisher of the Cool Tools website, which gets half a million unique visitors per month. From 1984-1990 Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He co-founded the ongoing Hackers' Conference, and was involved with the launch of the WELL, a pioneering online service started in 1985. He authored the best-selling New Rules for the New Economy and the classic book on decentralized emergent systems, Out of Control.

Thursday, November 18, 2010

Douglas Fox (Discover Magazine) - Virus Causes Schizophrenia?

An article from the June issue of Discover Magazine posits that schizophrenia is caused by a virus we all carry in our DNA. If this turns out to be true, it could open whole new possibilities for treatment - and more importantly, early detection, before the hallucinations begin.

One important quote from later in the article:
The infection theory could also explain what little we know of the genetics of schizophrenia. One might expect that the disease would be associated with genes controlling our synapses or neurotransmitters. Three major studies published last year in the journal Nature tell a different story. They instead implicate immune genes called human leukocyte antigens (HLAs), which are central to our body’s ability to detect invading pathogens. “That makes a lot of sense,” Yolken says. “The response to an infectious agent may be why person A gets schizophrenia and person B doesn’t.”
There's a big clue in those studies, along with Dr. Fuller Torrey's observation that the blood of schizophrenics contains immune cells one might expect to see in mononucleosis.

The Insanity Virus

Schizophrenia has long been blamed on bad genes or even bad parents. Wrong, says a growing group of psychiatrists. The real culprit, they claim, is a virus that lives entwined in every person's DNA.

by Douglas Fox

From the June 2010 issue; published online November 8, 2010

Steven and David Elmore were born identical twins, but their first days in this world could not have been more different. David came home from the hospital after a week. Steven, born four minutes later, stayed behind in the ICU. For a month he hovered near death in an incubator, wracked with fever from what doctors called a dangerous viral infection. Even after Steven recovered, he lagged behind his twin. He lay awake but rarely cried. When his mother smiled at him, he stared back with blank eyes rather than mirroring her smiles as David did. And for several years after the boys began walking, it was Steven who often lost his balance, falling against tables or smashing his lip.

Those early differences might have faded into distant memory, but they gained new significance in light of the twins’ subsequent lives. By the time Steven entered grade school, it appeared that he had hit his stride. The twins seemed to have equalized into the genetic carbon copies that they were: They wore the same shoulder-length, sandy-blond hair. They were both B+ students. They played basketball with the same friends. Steven Elmore had seemingly overcome his rough start. But then, at the age of 17, he began hearing voices.

The voices called from passing cars as Steven drove to work. They ridiculed his failure to find a girlfriend. Rolling up the car windows and blasting the radio did nothing to silence them. Other voices pursued Steven at home. Three voices called through the windows of his house: two angry men and one woman who begged the men to stop arguing. Another voice thrummed out of the stereo speakers, giving a running commentary on the songs of Steely Dan or Led Zeppelin, which Steven played at night after work. His nerves frayed and he broke down. Within weeks his outbursts landed him in a psychiatric hospital, where doctors determined he had schizophrenia.

The story of Steven and his twin reflects a long-standing mystery in schizophrenia, one of the most common mental diseases on earth, affecting about 1 percent of humanity. For a long time schizophrenia was commonly blamed on cold mothers. More recently it has been attributed to bad genes. Yet many key facts seem to contradict both interpretations.

Schizophrenia is usually diagnosed between the ages of 15 and 25, but the person who becomes schizophrenic is sometimes recalled to have been different as a child or a toddler—more forgetful or shy or clumsy. Studies of family videos confirm this. Even more puzzling is the so-called birth-month effect: People born in winter or early spring are more likely than others to become schizophrenic later in life. It is a small increase, just 5 to 8 percent, but it is remarkably consistent, showing up in 250 studies. That same pattern is seen in people with bipolar disorder or multiple sclerosis.

“The birth-month effect is one of the most clearly established facts about schizophrenia,” says Fuller Torrey, director of the Stanley Medical Research Institute in Chevy Chase, Maryland. “It’s difficult to explain by genes, and it’s certainly difficult to explain by bad mothers.”

The facts of schizophrenia are so peculiar, in fact, that they have led Torrey and a growing number of other scientists to abandon the traditional explanations of the disease and embrace a startling alternative. Schizophrenia, they say, does not begin as a psychological disease. Schizophrenia begins with an infection.

The idea has sparked skepticism, but after decades of hunting, Torrey and his colleagues think they have finally found the infectious agent. You might call it an insanity virus. If Torrey is right, the culprit that triggers a lifetime of hallucinations—that tore apart the lives of writer Jack Kerouac, mathematician John Nash, and millions of others—is a virus that all of us carry in our bodies. “Some people laugh about the infection hypothesis,” says Urs Meyer, a neuroimmunologist at the Swiss Federal Institute of Technology in Zurich. “But the impact that it has on researchers is much, much, much more than it was five years ago. And my prediction would be that it will gain even more impact in the future.”

The implications are enormous. Torrey, Meyer, and others hold out hope that they can address the root cause of schizophrenia, perhaps even decades before the delusions begin. The first clinical trials of drug treatments are already under way. The results could lead to meaningful new treatments not only for schizophrenia but also for bipolar disorder and multiple sclerosis. Beyond that, the insanity virus (if such it proves) may challenge our basic views of human evolution, blurring the line between “us” and “them,” between pathogen and host.

Read the whole article.