Sunday, August 31, 2014

Epigenetics, Stress, and Their Potential Impact on Brain Network Function: A Focus on the Schizophrenia Diatheses


The current thinking on the etiology of schizophrenia is the diathesis-stress model, which suggests that a genetic vulnerability (the diathesis) is triggered by environmental stress (the leading candidates after birth are neglect and/or sexual and physical abuse) and develops into schizophrenia at some point in the person's life (generally between ages 15 and 35).

Recent findings suggest there are at least 108 genes associated with schizophrenia, which only underscores the complexity of this particular illness (with at least 108 genes playing a role, the possible combinations of genes either turned on or off to produce the symptoms of the disease are staggering).

Further, this topic is complicated by evidence that genes involved in schizophrenia are also involved in bipolar disorder and alcoholism, that another gene links schizophrenia to cannabis addiction, that still others link schizophrenia to anxiety disorders or depression/mood disorders and suicide, and that a combination of a particular virus in the mother and a specific gene variant in the child may raise risk, not to mention the oft-reported links between schizophrenia and creativity (often attributed to defective genes in the dopaminergic system) [Richards, R. (2000-2001). Creativity and the Schizophrenia Spectrum: More and More Interesting. Creativity Research Journal; 13(1): 111–132].

The article below extends the considerable evidence for stress-related triggers of genetic vulnerabilities into an epigenetic account of the etiology of schizophrenia.

Full Citation: 
Diwadkar VA, Bustamante A, Rai H and Uddin M. (2014, Jun 24). Epigenetics, stress, and their potential impact on brain network function: a focus on the schizophrenia diatheses. Frontiers in Psychiatry; 5:71. doi: 10.3389/fpsyt.2014.00071

Epigenetics, stress, and their potential impact on brain network function: a focus on the schizophrenia diatheses


Vaibhav A. Diwadkar [1], Angela Bustamante [2], Harinder Rai [1] and Monica Uddin [1,2]
1. Department of Psychiatry and Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, USA
2. Center for Molecular Medicine and Genetics, Wayne State University School of Medicine, Detroit, MI, USA
The recent sociodevelopmental cognitive model of schizophrenia/psychosis is a highly influential and compelling compendium of research findings. Here, we present logical extensions to this model incorporating ideas drawn from epigenetic mediation of psychiatric disease, and the plausible effects of epigenetics on the emergence of brain network function and dysfunction in adolescence. We discuss how gene–environment interactions, effected by epigenetic mechanisms, might in particular mediate the stress response (itself heavily implicated in the emergence of schizophrenia). Next, we discuss the plausible relevance of this framework for adolescent genetic risk populations, a risk group characterized by vexing and difficult-to-explain heterogeneity. We then discuss how exploring relationships between epigenetics and brain network dysfunction (a strongly validated finding in risk populations) can enhance understanding of the relationship between stress, epigenetics, and functional neurobiology, and the relevance of this relationship for the eventual emergence of schizophrenia/psychosis. We suggest that these considerations can expand the impact of models such as the sociodevelopmental cognitive model, increasing their explanatory reach. Ultimately, integration of these lines of research may enhance efforts of early identification, intervention, and treatment in adolescents at-risk for schizophrenia.

Introduction

Schizophrenia remains the most profoundly debilitating of psychiatric conditions (1, 2). General theories have struggled to capture the complexity of the disorder: genetic polymorphisms (3), neurodevelopment (4), and altered neurotransmission [dopamine (DA) and glutamate] (5, 6) have all been proposed as mediating factors in its emergence. A recently proposed “sociodevelopmental cognitive model” (7) has made compelling additions to the discourse on schizophrenia, with a specific emphasis on psychosis. A factorial combination of genetic and neurodevelopmental effects sensitizes the DA system in early life. The disordered sensitivity subsequently leads to a disordered stress response that is further amplified by misattributed salience and paranoia. This cascading and recursive series of events eventually leads to the entrenchment of psychosis (and schizophrenia), explaining the life-long nature of the illness. This model is uniquely important because it integrates environmental, genetic, developmental, and molecular mechanisms (all converging on dysregulated DA release), providing a synthesis for several multi-disciplinary research agendas. Here, we attempt an incremental contribution to this synthesis, suggesting that an expansion of this model may help elucidate the following:
(a) How do gene–environment interactions, effected by epigenetic mechanisms, mediate the stress response? The role of epigenetic mechanisms may be crucial in understanding why certain individuals at genetic risk eventually convert to schizophrenia but others with similar genetic vulnerability do not.

(b) In this context, the vexing problem of specific genetic at-risk populations is considered. Specifically, adolescents one or both of whose parents have a diagnosis of schizophrenia form a “perfect storm” of genetic and neurodevelopmental contributors to risk for schizophrenia. These individuals present with extensive pre-morbid cognitive deficits (8) and sub-threshold clinical symptoms (9), yet a majority of them do not appear to develop the disorder. Whereas unexplained neurodevelopmental variation and resilience may explain this (10), we suggest that epigenetic mediation, particularly of genes mediating the stress response in adolescence, may explain some of this uncharacterized variance.

(c) Finally, we note the vast evidence of functional brain network disruptions in schizophrenia, and the fact that these disruptions are now being characterized in at-risk populations, including children of patients, and suggest that epigenetic effects may mediate the shaping of functional brain networks in the adolescent risk state, resulting in a highly variable and (currently) unpredictable pattern of conversion to psychosis (hence explaining the difficulty in estimating incidence rates of schizophrenia in at-risk groups).
In short, the proposed addendum motivates the role of epigenetics in the schizophrenia diathesis, the (potentially crucial) role of epigenetics in setting gene-expression levels that mediate the stress response, and their ultimate causal (though presently unproven) effect on developing brain networks that sub-serve many of the cognitive functions impaired in schizophrenia. We note at the outset that the proposed extensions remain speculative, yet seek to account for the relative under-representation of epigenetic considerations in schizophrenia-related research to date. In fact, epigenetics may provide a more proximate mediator of neuronal and behavioral effects than changes in the DNA sequence, and in turn these neuronal alterations may predispose individuals to schizophrenia, a question that has received comprehensive coverage in a recent canonical review (11). Moreover, the proposed additions also provide a prospective research impetus for studying particular sub-groups such as children of schizophrenia patients, a group that provides a particularly unique intersection of genetic risk, altered neurodevelopment, and environmental contributions (12–14). Finally, the notion of stress reactivity impacting brain network function is a particular extension of the seminal concept of “allostatic load” (15, 16), morphologic degeneration as a response to repeated adaptive responses to stress.

Genetics, Development, Environment: An Array of Interactions

Schizophrenia is an “epigenetic puzzle” (17). Apart from the rare variant of the illness that is childhood-onset schizophrenia (18), the typical manifestations of schizophrenia occur in late adolescence and early adulthood (1). This relatively late onset suggests that a seemingly intractable array of interactions between genetically endowed vulnerability and environmental effects may amplify genetic predisposition, leading to post-natal effects on brain plasticity and development in the critical adolescent period (2, 19). The role of genes in mediating the emergence of the disorder is likely to be extremely complex. After all, genes do not code for complex psychiatric disorders but for biological processes (20). Thus, dysfunctional genetic expression is likely to lead to dysfunctional biological processes, with psychiatric disorders an emergent phenomenon in this causal pathway (20, 21). Moreover, the lack of complete concordance even in monozygotic twins (22, 23) suggests that genes primarily confer vulnerability to the illness, and that other factors mediating gene-expression during pre- and post-natal development, across the life span, and through environmental effects play a significant role in the transition to the illness.

Several proximate environmental factors may be highly relevant, as noted in the sociodevelopmental cognitive model. Stress – narrowly defined as a real or implied threat to homeostasis (24) – assumes particular importance, primarily because adolescence is a period of dynamic stress both in terms of substantive neurodevelopmental turnover (25) and environmental influence (26). Repeated stress exposure, in particular during critical developmental periods, exerts untenable biophysical costs. These costs, typically referred to as allostatic load, increase vulnerability for somatic disease (27), and notably exert tangible biological effects. For example, glucocorticoid elevations that result from chronic stress have been associated with medial temporal lobe atrophy across multiple disorders including mood disorders, post-traumatic stress disorder, and schizophrenia (28–30). Beyond medial temporal lobe regional atrophy, the documented molecular effects in the prefrontal cortex are suspected to ultimately impact frontal–striatal brain networks (31, 32). Elevated DA release during acute stress (33) adversely affects prefrontal pyramidal cells, leading to a series of degenerative molecular events. The resultant dendritic spine loss in the infra-granular prefrontal cortex results in reductions in prefrontal-based network connectivity, particularly on prefrontal efferent pathways (34). These molecular effects are likely to have mesoscopic expressions; among them, disordered prefrontal cortex related brain network function and organization that are hallmarks of schizophrenia (3, 35–37).

Stress and the Risk State for Schizophrenia

The risk state for schizophrenia offers a powerful framework for synthesizing multiple theoretical constructs of the disease (38), and disordered stress reactivity may play a key role in amplifying disposition for psychosis in the risk state (39). A critical challenge for high-risk research is navigating the relationship between multiple (and potentially non- or partially overlapping) risk groups, each with different etiologies and defined based on different criteria (40). Here we consider prodromal subjects (41–46), in whom the role of stress has been heavily assessed, separately from adolescents with a genetic history of schizophrenia (including twins discordant for the illness and offspring of patients). The role of stress in the latter groups is relatively understudied. We note that the distinction does not imply exclusivity but rather reflects the criteria used to identify risk. Prodromal or clinical high-risk subjects (also on occasion referred to as “ultra high-risk”) are classified as such because they show non-specific yet considerably advanced clinical symptoms (47). Rates of conversion to psychosis within a short period after the emergence of clinical symptoms are high (estimates around 35%) (48). Genetic high-risk groups are typically identified on account of a family history of the illness itself; that is, not using clinical criteria. However, genetic high-risk groups may present with prodromal symptoms, hence these groups are not exclusive.

We will ultimately seek to drive our ideas in the direction of genetic risk in adolescence, largely because the prodromal question is heavily addressed in the sociodevelopmental model, whereas adolescent genetic risk is not. The adolescent genetic risk state presents a particularly vexing challenge, with substantial heterogeneity and relatively low rates of conversion to psychosis (9). The early identification of individuals who are likely to convert from the genetic risk state to actual schizophrenia (or psychosis) thus remains a key issue to be addressed by future research efforts, as we propose here.

Prodromal subjects (sometimes referred to as “clinical high-risk”) present with a variety of symptoms that do not specifically warrant a diagnosis of schizophrenia, but include paranoia and impairment in social function. In general, prodromal patients have high rates of conversion to schizophrenia itself (48). For instance, multiple studies suggest that the average 12-month conversion rate in ultra high-risk samples not receiving any special anti-psychotic treatment is between 35 and 38% (48, 49). That a significant percentage of these individuals convert to psychosis is unsurprising because, as noted, the prodromal state constitutes a highly advanced stage of clinical symptoms. Thus, these relatively non-specific symptoms that precede, and predict, the presentation of the illness itself (38, 48, 50, 51) are considered the best clinical predictor of schizophrenia. The impaired neurobiology of the prodromal state is also relatively well understood: subjects are characterized by profound deficits in brain structure that are typically intermediate between healthy controls and those observed in patients. Recent fMRI studies indicate substantive deficits in regional and brain network interactions (52–54), including frontal–striatal and frontal–limbic; cognitive and social neuroscience has established a crucial role for these networks in sub-serving basic mechanisms of memory, attention, and emotion. Heightened stress reactivity itself may be exacerbated by the presence of sub-threshold symptoms. For instance, prodromal subjects show heightened sensitivity to inter-personal interaction, an indirect measure of heightened stress (55), and a significant percentage of prodromal subjects who have experienced trauma in their lives convert to psychosis (41). As noted, DA synthesis is increased in prodromal subjects, and the degree of synthesis is positively associated with the severity of sub-threshold clinical symptoms (56). Moreover, impaired stress sensitivity is also associated with a wide range of prodromal symptoms (44). The role of stress sensitivity, the hypothalamic–pituitary–adrenal (HPA) axis, and its impact on brain structures has been heavily treated in the empirical and theoretical literature (43, 45, 57–59).

In contrast to the prodromal state, which includes individuals with a degree of existing symptoms, the genetic high-risk state encompasses individuals who are defined by having one (or more) parent(s) with schizophrenia, and who themselves may or may not evince symptoms of the disorder. The genetic high-risk state constitutes a partial complement of the clinical high-risk or prodromal state (these samples are often “enriched” by subjects with a family history of schizophrenia or psychosis, providing overlap) (60). Genetic distance from a schizophrenia patient is a strong predictor of risk for the disease, and of the degree of biological impairments including brain structure, function, and behavior (61, 62). For example, children of schizophrenia patients being reared by the ill parent constitute a very particular and enigmatic high-risk sub-group (9, 13). These individuals have a genetic loading for the disease, but are also likely exposed to increased environmental stressors by virtue of being raised by their ill parent. Unlike in prodromal groups, conversion to psychosis in genetic high-risk groups is more variable and lower.

Three principal longitudinal genetic high-risk studies are informative regarding lifetime incidence of schizophrenia in these groups. Between them, the New York (63) and Copenhagen (64) high-risk projects and a notable Israeli study (65) have provided evidence of lifetime incidences of narrowly defined schizophrenia of between 8 and 21%. While low, these rates constitute significantly elevated incidence rates relative to the sporadic incidence in the population (~1–2%). However, these rates are still notably lower than conversion rates in prodromal populations, a discrepancy that is somewhat surprising because the developmental psychopathology that characterizes prodromal patients is the very same one in play in adolescent high-risk subjects (45, 46). Subjects at genetic risk also show increased HPA axis sensitivity (59, 66), similar to what is observed in prodromal subjects, though the relationship to regional measures of brain integrity (e.g., pituitary size) is highly variable, and perhaps not informative as a biomarker (67). Heterogeneity is a cardinal characteristic of genetic risk groups (68, 69). Significant percentages of these subjects show attention deficits, working memory impairment, emotion dysregulation, and sub-threshold symptoms including negative symptoms (9, 70–75). Notably, each of these cognitive, emotional, and clinical domains is highly impacted by stress sensitivity in adolescence (76, 77). Adolescent risk subjects also present with increased frequency of sub-threshold clinical symptoms including schizotypy and both positive and negative symptoms such as anhedonia (78–80), some of which have been associated with perceived stress (81, 82).

Understanding of altered DA synthesis in genetic risk groups is limited. A recent study in twins discordant for schizophrenia showed no elevation of striatal DA synthesis in the healthy twin (83), though the age range was well past the typical age of onset of the illness, and the healthy twin must retrospectively be classified as “low risk.” It is plausible that elevated striatal DA is not a marker of genetic risk per se, but might distinguish between adolescent sub-groups. Given that animal models and human studies have been highly informative in elucidating the impact of stress on neurobiology (32, 84), it is plausible that these effects might be quantifiable in neuroimaging data derived from such models in the context of risk for schizophrenia.

Brain Network Dysfunction in the Adolescent Risk State for Schizophrenia

The origins of psychiatric disorders lie in adolescence (85, 86), a developmental stage characterized by a unique set of vulnerabilities, where highly dynamic neurodevelopmental processes intersect with increasing environmental stressors (26, 87). The idea of “three-hits” in schizophrenia, which includes pre-natal insults (e.g., obstetric complications, exposure to infections in utero), neurodevelopmental processes and disease-related degeneration, predicts the emergence of reliable and identifiable abnormalities through the life span (10, 88, 89). Notably, the period from birth to early adulthood is characterized by significant potential for epigenetic dysfunction that can increase symptom severity, beginning with the emergence of sub-threshold symptoms in adolescence, and culminating (in some individuals) in psychotic symptoms in young adulthood (11). Moreover, brain network development remains highly tumultuous in this period and disordered brain network dynamics are likely to be a cardinal biological characteristic in adolescents at genetic risk for the illness (13).

Disordered frontal–striatal and frontal–limbic brain network interactions, a defining characteristic of schizophrenia (90, 91), are increasingly established in the adolescent genetic risk state. These interactions are well-understood for working memory and sustained attention, both domains particularly associated with these regions (92), with risk for schizophrenia (70), and with DA (93, 94). During working memory, adolescents at genetic risk for schizophrenia show inefficient regional responses as well as network interactions in frontal and striatal regions. During working memory-related recall, at-risk subjects hyper-activate frontal–striatal regions, specifically for correctly recalled items (95), an effect highly consistent with what has been documented in schizophrenia itself (96, 97) and with large studies assessing the relationship between genetic risk and prefrontal efficiency (98).

More impressively, network interactions are also inefficient. For instance, the degree of modulation by the dorsal anterior cingulate, the brain’s principal “cognitive control” structure (99), during working memory is significantly increased in at-risk subjects (100). Thus, when performing the task at levels comparable to typical control subjects, control-related “afferent signaling” from the dorsal anterior cingulate cortex is aberrantly increased in adolescents at genetic risk. This evidence of inefficient pair-wise network interactions is highly revealing of “dysconnection” in the adolescent risk state. Similar results have been observed in the domain of sustained attention, where again, frontal–striatal interactions are impaired in the risk state (80, 101). Genetic high-risk subjects are also characterized by disordered “effective connectivity” estimated from fMRI signals. Effective connectivity can be understood as the most parsimonious “circuit diagram” replicating the observed dynamic relationships between acquired biological signals (102). Recent evidence suggests reduced thalamocortical (54) and frontal–limbic (103) effective connectivity in genetic risk groups. These and other studies establish a pattern of general brain network dysfunction in adolescents at genetic risk for schizophrenia, suggesting that dysfunction in cortical networks is a plausible “end-point” in a cascade of genetic and neurodevelopmental events.
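For readers unfamiliar with how such “circuit diagrams” are parameterized, one common formalization (a bilinear, DCM-style state equation; whether the cited studies used exactly this framework is an assumption here) separates intrinsic connections from their contextual modulation, the same distinction drawn in the Figure 1 caption below:

```latex
% A minimal sketch of a bilinear effective-connectivity (DCM-style) state equation.
% Illustrative only; the cited studies may use other estimation frameworks.
\[
\dot{x}(t) \;=\; \Big( A + \sum_{j} u_j(t)\, B^{(j)} \Big)\, x(t) \;+\; C\, u(t)
\]
% x(t)  : neural states of the modeled regions
% A     : intrinsic (context-independent) connections between regions
% B^(j) : modulation of specific connections by experimental input u_j (e.g., task context)
% C     : direct driving influence of inputs on regional activity
```

Estimating the intrinsic (A) and modulatory (B) parameters, and comparing them across groups, is what licenses statements such as “reduced thalamocortical effective connectivity” in risk populations.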

However, this story on brain networks is incomplete, because these high-risk groups present with considerable heterogeneity in sub-clinical symptoms, and recent evidence suggests that this heterogeneity predicts fMRI responses. For example, high-risk subjects with sub-threshold negative symptoms show attenuated responses to rewarding social stimuli, particularly in regions of the limbic system, including the amygdala and the ventral prefrontal cortex (75). This pattern of responses is in fact similar to that seen in patients with frank depression, and provides additional compelling evidence in support of stress mediating the emergence of negative symptoms that in turn affect functional brain networks (44, 104–107).

Pathways and Epigenetic Mediation

Psychological stress is a major mediator of externally experienced (i.e., environmental) events, with relevance to both the central and peripheral nervous systems (108). Stress induces the release of corticotrophin releasing factor that activates the HPA axis to produce cortisol, and the sympathetic nervous system to produce norepinephrine and epinephrine. In some individuals, the initiation of an acute, adaptive “fight-or-flight” response in the face of threatening events becomes persistent and pathological. How this failure to return to homeostasis occurs in only a subset of individuals, resulting in a psychopathological state, remains to be fully elucidated. Stress is a clear risk factor for schizophrenia (109), and the biologic mechanisms linking stress, schizophrenia, and risk for schizophrenia are still being comprehensively characterized.

One candidate factor that may be a mediator in this causal chain is epigenetics, a field of increasing interest in mental illness, including risk for schizophrenia (110–112). Epigenetics, a term proposed nearly 70 years ago by Conrad Waddington, was born out of the terms “genetics” and “epigenesis,” narrowly referring to the study of causal relationships between genes and their phenotypic effects (113), but more recently associated with changes in gene activity that occur independently of the DNA sequence, may or may not be heritable, and may also be modified through the life span. Epigenetic factors include DNA methylation, which in vertebrates typically involves the addition of a methyl group to cytosine where a cytosine is followed by a guanine on the same DNA strand (CpG sites); histone modifications, involving the addition (or removal) of chemical groups to the core proteins around which DNA is wound; and non-coding RNAs such as microRNAs (miRNAs), which bind to mRNAs to suppress gene-expression post-transcriptionally. Among these several mechanisms, DNA methylation is the most stable and the best studied within the context of psychiatric disorders, including schizophrenia, although emerging work suggests that miRNAs, which target multiple mRNA transcripts and serve as stress-responsive master regulators of developmental gene-expression patterns (114), may also play an etiologic role in SCZ (115).

As mounting evidence fails to conclusively link individual genes to specific mental illnesses (116), epigenetic effects during critical developmental periods assume increasing significance (11). In such a model, genetic etiology may be expressed in differentiated psychiatric phenotypes because epigenetic factors, changing in response to external experiences, vary across these phenotypes. Indeed, as potential regulators of DNA accessibility and activity, epigenetic factors, through influences on gene-expression, offer a mechanism by which the environment – and, in particular, one’s response to the environment – can moderate the effects of genes (117). In the context of schizophrenia, models suggest that epigenetic deregulation of gene-expression at specific loci alone is highly unlikely to account for the illness, again given its highly polygenic nature. Rather, epigenetic effects may progressively impact gene-expression in salient neurodevelopmental gene networks during critical developmental periods, in response to environmental inputs (11). For example, the loss of synchronous activity of GABAergic interneurons in the prefrontal cortex might result from environmental stressors such as cannabis (118), which interact with the expression of vulnerability genes such as GAD1 that control GABA synthesis (119).

Previous work has shown that glucocorticoids (GC) such as cortisol induce epigenetic, DNA methylation changes in HPA axis genes (e.g., FK506 binding protein 5, FKBP5), both in neuronal [i.e., hippocampal (120, 121)] and peripheral [i.e., blood (121–123)] tissues, as well as in additional cells relevant to the HPA axis [i.e., pituitary cells (120)]. Moreover, GC-induced DNA methylation changes persist long after cessation of GC exposure (121–123), suggesting that stress-induced GC cascades have long lasting consequences for HPA axis function that may be accompanied by behavioral (mal)adaptations (121, 124).

These epigenetic mechanisms are of relevance to the previously noted role of stress as a major contributor to the emergence of cognitive impairments in first-episode psychosis, in particular resulting from high stress sensitivity in this group (125). Stress sensitivity, a tendency to experience negative affect in response to negative environmental events (126), is a well-established risk factor for psychopathology (127), including schizophrenia (44, 128). This role has been clarified in recent work using experience sampling methods (ESM), in which participants in prospective studies note their life experiences in real time. In one twin-study design with a large longitudinal cohort of mono- and dizygotic twins, participants recorded multiple mood and daily life events, with stress sensitivity defined as the increase in recorded negative affect in response to event unpleasantness. Notably, stress sensitivity showed relatively little genetic mediation and was almost exclusively environmentally determined (126). Whereas non-ESM investigations and some animal studies in models of schizophrenia (129) suggest a genetic, heritable component, the majority of variance still appears to be environmentally determined (130, 131). Thus, stress sensitivity is a labile characteristic that can change in response to environmental experiences to alter risk for psychopathology. Tracking epigenetic changes in stress-sensitive genes of the HPA axis, as well as additional stress-sensitive genes that interact with the HPA axis, might enable identification of a biologic mechanism that mediates risk for, and the emergence of, schizophrenia. Indeed, strong signatures of gene-expression differences in stress-related genes have recently been identified in post-mortem brain tissue in a manner that distinguishes schizophrenia patients from controls and from individuals with other psychiatric disorders (132). Many of these are likely accompanied by DNA methylation differences, as has been reported by studies performed on related genes in animal models (133).
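To make the ESM definition concrete, the sketch below estimates stress sensitivity as a within-person slope of negative affect on rated event unpleasantness. It is a minimal illustration with hypothetical column names; the cited twin studies use more elaborate multilevel models.

```python
# Minimal sketch: per-participant stress sensitivity from ESM records, defined here
# as the slope of momentary negative affect on rated event unpleasantness.
# Column names ("participant_id", "event_unpleasantness", "negative_affect") are
# illustrative and not taken from the cited studies.
import numpy as np
import pandas as pd

def stress_sensitivity(esm: pd.DataFrame) -> pd.Series:
    """Return one slope per participant: the change in negative affect per unit
    increase in event unpleasantness across that participant's ESM observations."""
    def slope(records: pd.DataFrame) -> float:
        x = records["event_unpleasantness"].to_numpy(dtype=float)
        y = records["negative_affect"].to_numpy(dtype=float)
        if len(x) < 2 or np.all(x == x[0]):
            return np.nan  # too little variation to estimate a slope
        return np.polyfit(x, y, deg=1)[0]
    return esm.groupby("participant_id").apply(slope)

# Usage with simulated ESM records:
# esm = pd.DataFrame({"participant_id": [...], "event_unpleasantness": [...], "negative_affect": [...]})
# sensitivity = stress_sensitivity(esm)  # a higher slope indicates greater stress sensitivity
```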

Emerging evidence suggests that brain endophenotypes, as well as psychiatric outcomes, can be predicted by peripheral DNA methylation measurements. Notably, genes belonging to the HPA axis, as well as DA- and serotonin (5HT)-related genes whose products interact with those of the HPA axis, shape the stress response (109, 134, 135) and are known to show psychopathology-associated differences in blood (136–138). For example, recent work has shown that leukocyte DNA methylation in the serotonin transporter locus (SLC6A4) was higher among adult males who had experienced high childhood-limited physical aggression; moreover, SLC6A4 DNA methylation was negatively correlated with serotonin synthesis in the orbitofrontal cortex, as measured by positron emission tomography (PET) (139). Similarly, leukocyte DNA methylation in the promoter region of the MAOA gene, whose product metabolizes monoamines such as serotonin and DA, is negatively associated with brain MAOA levels as measured by PET in healthy male adults (140). Structural imaging data analyses in relation to the FKBP5 locus discussed above have identified a negative association between DNA methylation in peripheral blood and volume of the right (but not left) hippocampal head (121). This observation is particularly noteworthy, as it suggests that lower FKBP5 DNA methylation in peripheral blood is associated not only with altered stress sensitivity (as indexed by a glucocorticoid receptor sensitivity assay within the same study), but also with structural differences in a brain region known to mediate stress reactivity (121). Finally, investigation of the COMT locus, a gene encoding an enzyme critical for degradation of DA and other catecholamines, has shown that, among Val/Val genotypes, subjects (all healthy adult males) with higher stress scores have reduced DNA methylation at a CpG site located in the promoter region of the gene (141). Moreover, DNA methylation at this site was positively correlated with working memory accuracy, with greater methylation predicting a greater percentage of correct responses (with results again limited to analysis of the Val/Val subjects); furthermore, fMRI demonstrated a negative correlation between DNA methylation at this site and bilateral PFC activity during the working memory task (141). Additional analyses showed an interaction between methylation and stress scores on bilateral prefrontal activity during working memory, indicating that greater stress, when combined with lower methylation, is associated with greater activity (141).
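For readers who want to picture the kind of moderation analysis being described, the sketch below fits a simple methylation-by-stress interaction model on a prefrontal activity measure. Variable names and the bare OLS form are illustrative assumptions; the original COMT study's models were more elaborate (genotype stratification, covariates, voxel-wise analysis).

```python
# Hypothetical sketch of a methylation x stress moderation analysis on a
# working-memory-related prefrontal fMRI measure, in the spirit of the COMT
# findings described above. Column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation_model(subjects: pd.DataFrame):
    """OLS with an interaction term: does self-reported stress moderate the
    association between promoter DNA methylation and PFC activity?"""
    model = smf.ols("pfc_activity ~ methylation * stress_score", data=subjects)
    return model.fit()

# result = fit_moderation_model(subjects_df)
# print(result.summary())
# The 'methylation:stress_score' coefficient tests the interaction; a pattern of
# greater activity with higher stress and lower methylation would echo the text above.
```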

This last finding is especially noteworthy because, whereas stress–DNA methylation interactions have been reported for other stress-sensitive loci (142), the referenced study represents a direct demonstration of heterogeneity in stress load that, when moderated by DNA methylation, impacts working memory. Clearly, greater stress and lower COMT DNA methylation correlate with reduced efficiency of prefrontal activity (141). This mechanism may be explained by the fact that disordered stress responses following prolonged stress exposure induce hyper-stimulation of prefrontal DA receptors (143, 144), which may be mediated by prefrontal glutamate neurotransmission (145). This hyper-stimulation in turn appears to affect the receptive field properties of prefrontal neurons during working memory (94). Patterns of network dysfunction in the genetic risk state may reflect brain network sensitivity to stress in the “pre-morbid” risk state that may be under as yet undiscovered epigenetic control. Thus, much of the unaccounted variance in schizophrenia previously construed as genetic may well be epigenetic (11, 146). Is it possible to assess epigenetic factors mediating the stress response in risk for schizophrenia, and their effects on brain network function?

The influence of stress on DNA methylation of HPA axis genes in blood is well established (121–123). Indeed, blood disperses GC hormones produced by the HPA axis throughout the body, where they regulate gene-expression in virtually all cell types (108). Thus, the broad reach of HPA axis activity, together with evidence that blood-derived DNA methylation in HPA axis genes is altered through stress (121, 147), provides ample biologic and clinical plausibility to our proposed hypothesis that stress sensitivity, measured in the periphery, can serve as an important – perhaps even predictive – index of transition from the genetic risk state into actual schizophrenia. Importantly, although GCs also influence DNA methylation and gene-expression in the CNS and neuronal cells (120, 121), our model does not suppose that epigenetic measures in CNS tissues will match those in the periphery; rather, it proposes that DNA methylation in stress-sensitive, HPA-axis genes in the periphery will index the known dysregulation in brain function and connectivity in stress-sensitive regions of the brain among adolescents at genetic risk. Figure 1 provides an overview of an integrative approach and builds on previous considerations of epigenetic mechanisms in developmental psychopathology (11).
FIGURE 1
http://www.frontiersin.org/files/Articles/92099/fpsyt-05-00071-HTML/image_m/fpsyt-05-00071-g001.jpg

Figure 1. Overview of working model. HPA axis reactivity is determined both by intrinsic genetic factors and stressful environmental (including pre-natal) experiences. Stressful exposures induce a glucocorticoid (i.e., cortisol) cascade that then induces DNAm changes in HPA axis genes in the blood. These changes are expected to be more pronounced in at-risk adolescents, particularly those who may already exhibit sub-clinical psychopathology, such as negative symptoms. Risk-associated, blood-derived DNAm differences in HPA axis and related stress sensitivity genes are hypothesized to index metrics of brain function including activation patterns and effective connectivity in stress-sensitive brain regions. The activation patterns are reproduced from Diwadkar (13) and reflect engagement of an extended face-processing network in controls and high-risk subjects during a continuous emotion-processing task. These activations are most likely generated by complex dynamic interactions between brain networks that are represented in the figure below. The figure presents a putative combination of intrinsic connections between brain regions activated during such a task, and the contextual modulation of specific intrinsic connections by dynamic task elements. The role of effective connectivity analyses is to recover and estimate parameter values for intrinsic and modulatory connections that a) may be different in the diseased or risk state and b) may plausibly be under epigenetic mediation. The figure is adapted and reprinted from: Mehta and Binder (124), with permission from Elsevier; adapted by permission from Macmillan Publishers Ltd.: Frontiers in Neuropsychiatric Imaging and Stimulation (108). Reproduced with permission, Copyright © (2012) American Medical Association. All rights reserved.
Existing data support the hypothesis that schizophrenia-associated DNA methylation differences exist in stress-sensitive genes. Table 1 summarizes results from existing genome-scale studies that have been conducted in blood and brain in relation to schizophrenia, focusing specifically on the HPA axis genes involved in the glucocorticoid receptor complex (148), as well as representative DA- and serotonin-related genes, and genes whose products establish DNA methylation and have been shown to be responsive to glucocorticoid induction in both the brain and periphery [i.e., DNA methyltransferase 1, DNMT1; (120)]. As can be seen from the table, all of the genes show SCZ-related DNA methylation differences in brain-derived tissue (149), and the majority (four of five) of GC-receptor chaperone complex genes show DNA methylation differences in the blood as well. Although we have limited our analysis to genome-wide studies of DNA methylation, additional candidate gene studies have linked stress-sensitive mental disorders to methylation differences in blood (142, 150, 151), suggesting that similar findings may be forthcoming for schizophrenia as additional studies are completed. Importantly, among these genes, some (but not all) have shown that DNA methylation levels can vary depending on local [e.g., Ref. (141)] or distal [e.g., Ref. (121)] DNA sequence variation – so-called “methQTLs” (methylation quantitative trait loci). Thus, as evidence accumulates regarding the existence of methQTLs, we note that analyses based on these proposed genes should take them into consideration.
TABLE 1  
http://www.frontiersin.org/files/Articles/92099/fpsyt-05-00071-HTML/image_m/fpsyt-05-00071-t001.jpg
Table 1. Summary of genome-wide studies reporting differential DNA methylation (DM) within stress-sensitive genes in blood or brain.

Conclusion

Incorporating epigenetic considerations into the sociodevelopmental model might provide a particularly powerful explanatory framework for understanding genetic risk in adolescence. Regressive pressures from a combination of fixed genetic vulnerability for schizophrenia and epigenetic effects during adolescence are most likely to impact the development of neuronal network profiles (155, 156). As we noted earlier, advances in the analyses of fMRI signals now permit the estimation of effective connectivity and dysconnectivity in healthy, clinical, and at-risk populations, providing a significant framework for exploring brain dysfunction using a priori hypotheses (157). A focus on frontal–striatal and frontal–limbic dysconnectivity may be particularly warranted. A disordered stress response may cleave apart frontal–striatal and frontal–limbic neuronal network profiles in high-risk adolescents, providing a convergence of biological markers across multiple levels (genetic, epigenetic, and brain networks). Here, we have proposed that increased stress sensitivity (which can be indexed in the periphery) can help to unpack the heterogeneity among individuals at genetic high-risk of SCZ when linked to a strongly validated finding in genetic risk populations, namely brain network dysfunction. This framework may help to identify, among individuals at high genetic risk for SCZ, a subset who are likely to go on to develop the disorder. Our focus on stress-relevant genes does not exclude the possibility that genes in other pathways (e.g., dopaminergic, serotonergic, glutamatergic) may also be important; indeed, this focus may be considered a limitation of the proposed hypothesis. However, we believe that our proposed framework is a logical starting point for merging central and peripheral indicators of the potential for SCZ among high-risk individuals. This framework may help extend the sociodevelopmental cognitive model into the realm of high-risk research. The presence of non-specific, sub-threshold symptoms continues to remain a significant clinical challenge for disorders such as schizophrenia and bipolar disorder (38, 158). Early intervention strategies will be boosted if biological markers can be interlinked to identify ultra high-risk adolescents. Our intent is to motivate this search for biological convergence, hoping that it may lead to psychosis prediction and, ultimately, prevention.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work was supported by the National Association for Research on Schizophrenia and Depression (NARSAD, now the Brain & Behavior Research Foundation; Vaibhav A. Diwadkar), the Prechter World Bipolar Foundation (Vaibhav A. Diwadkar), the Lyckaki Young Fund from the State of Michigan, and the Children’s Research Center of Michigan (Monica Uddin). The agencies played no role in the shaping of the ideas presented herein.


Couples as Socially Distributed Cognitive Systems: Remembering in Everyday Social and Material Contexts


I can see this - it makes a wee bit of sense. The authors focus in this piece on a review of the empirical research suggesting that social sharing of memories is one of the most mundane examples of distributed cognition.

Below is the introduction to the much longer paper, which can be read at the link in the title.

Memory Studies; 2014: 7(3):285–297
DOI: 10.1177/1750698014530619

Couples as socially distributed cognitive systems: Remembering in everyday social and material contexts

Celia B Harris, Amanda J Barnier, John Sutton, and Paul G Keil

Abstract
In everyday life remembering occurs within social contexts, and theories from a number of disciplines predict cognitive and social benefits of shared remembering. Recent debates have revolved around the possibility that cognition can be distributed across individuals and material resources, as well as across groups of individuals. We review evidence from a maturing program of empirical research in which we adopted the lens of distributed cognition to gain new insights into the ways that remembering might be shared in groups. Across four studies, we examined shared remembering in intimate couples. We studied their collaboration on more simple memory tasks as well as their conversations about shared past experiences. We also asked them about their everyday memory compensation strategies in order to investigate the complex ways that couples may coordinate their material and interpersonal resources. We discuss our research in terms of the costs and benefits of shared remembering, features of the group and features of the remembering task that influence the outcomes of shared remembering, the cognitive and interpersonal functions of shared remembering, and the interaction between social and material resources. More broadly, this interdisciplinary research program suggests the potential for empirical psychology research to contribute to ongoing interdisciplinary discussions of distributed cognition.


Socially distributed remembering: theoretical and empirical background


Remembering the past plays a crucial role in our lives, our identities, our plans, and our social relationships (Harris et al., 2013b), and the fact that we frequently talk about the past with others has important consequences for the way we remember (Campbell, 2003; Harris et al., 2008, 2010; Pasupathi, 2001; Sutton et al., 2010; Weldon, 2000). In the current article, we apply the theoretical framework of distributed cognition (Barnier et al., 2008; Hutchins, 1995; Sutton, 2006) to group remembering. As we have argued elsewhere (Barnier et al., 2008; Sutton et al., 2010), a distributed cognition framework provides explanatory power for complex social memory phenomena; it drives novel research questions, new methods, and empirically testable hypotheses. In the current article, we update this argument by presenting findings from a maturing program of empirical research on shared remembering in couples.

Distributed cognition: definitions

The distributed cognition framework suggests that cognitive states and processes are sometimes distributed, such that neural and bodily resources couple in coordinated ways with material or social resources to accomplish cognitive tasks (Barnier et al., 2008; Clark, 1997). According to this view, external resources can become parts both of occurrent cognitive processes and of enduring integrated cognitive systems: ‘When parts of the environment are coupled to the brain in the right way, they become parts of the mind’ (Chalmers, 2008: 1; see also Sutton, 2010). This definition begs the question of what the ‘right way’ is for coupling to occur. Clark and Chalmers (1998) proposed the following criteria:
1. That the resource be reliably available and typically invoked …
2. That any information thus retrieved be more-or-less automatically endorsed …
3. That information contained in the resource should be easily accessible as and when required. (Clark, 2010: 6–7)
While debate continues to refine these conditions (Sterelny, 2010; Sutton, 2010; Sutton et al., 2010), we can usefully adopt them for the purposes of this exposition, to motivate and test against empirical research.

What kinds of cognitive tasks?

There are three compatible possibilities for the kinds of cognitive tasks that lend themselves to distribution across internal and external resources. First, cognitive distribution might enable the accomplishment of highly complex tasks that cannot be completed by an individual alone, such as navigating a ship (Hutchins, 1995). Second, cognitive distribution might enable individuals to accomplish tasks ‘better’ in some way, or more efficiently, or at least differently and with different outcomes from doing the tasks alone. Third, cognitive distribution might enable the maintenance of capacity to complete everyday tasks (which used to be done alone) as individual cognitive resources decline or fail. For instance, Clark and Chalmers (1998) described a thought experiment regarding ‘Otto’, a man with Alzheimer’s, whose notebook entries have literally become the contents of his memory. In observing strikingly similar real-world cases, Dennett (1996) noted that older individuals often ‘load their home environments with ultra-familiar landmarks, triggers for habits … Taking them out of their homes is literally separating them from large parts of their minds’ (see also Dahlbäck et al., 2013; Drayson and Clark, in press).

Socially distributed cognition

Cognitive distribution is arguably an everyday phenomenon, and the examples used to illustrate it are likewise everyday, like the cocktail waiter who relies on the shape of the glasses to remember ingredients in drinks, or an artist using a sketchpad (Clark, 1997; Van Leeuwen et al., 1999). Despite the field’s focus on material resources, distributed cognitive systems are likely to involve both material and social resources (Barnier et al., 2008; Sutton et al., 2010). In the current article, we review empirical research motivated by the view that social sharing of memories is one of the most mundane examples of distributed cognition (see also Barnier et al., 2008; Barnier, 2010).

We focus here on intimate couples remembering together. We have a number of reasons to expect that they are a particularly good example of the kinds of groups in which socially distributed cognition occurs (see also Wu et al., 2008). Adapting Clark and Chalmers’ (1998) criteria for considering objects as part of cognition, Tollefsen (2006) suggested that Person A can be incorporated into Person B’s cognitive processing under the following conditions: (1) if Person A is available and typically invoked; (2) if Person B accepts Person A’s information without question; (3) if Person A is readily accessible by Person B; and (4) if information stored by Person A was endorsed by Person B at some point. Long-married couples who frequently discuss their past and future across their lives may meet these criteria (see also Sutton et al., 2010; Tollefsen, 2006). Put another way, couples are ‘persisting integrated systems’ (cf. Rupert, 2010; see also Wegner, 1987; Wegner et al., 1985).

Making couples the unit of analysis can yield insights not available when studying individuals (see also Hinsz et al., 1997). That is, groups such as couples may exhibit emergence when they remember together, meaning that the group product is different from the aggregation of individual memories (see also Theiner, 2013; Theiner and O’Connor, 2010). Such emergence may be positive (such as the generation of new information) or negative (such as the introduction of errors). Wegner’s (1987) Transactive Memory theory predicts benefits of shared remembering as one kind of emergence: ‘group memory structures develop and become capable of memory feats far beyond those that might be accomplished by any individual’ (Wegner, 1995: 319).

Shared remembering in experimental psychology

In cognitive psychology, the collaborative recall paradigm was developed to measure the impact of remembering with others (Weldon and Bellinger, 1997). Using this method, the memory output of a group is compared to the pooled or aggregated (non-redundant) output of the same number of individuals remembering alone (see Basden et al., 2000; Harris et al., 2008; Rajaram and Pereira-Pasarin, 2010). This comparison is useful for considering whether groups show the kind of emergent properties that would be predicted by conceptualising them as distributed cognitive systems, since it indexes whether the recall of a collaborative group is quantitatively different from the sum of its parts.
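As a concrete, hypothetical illustration of that comparison, the sketch below scores a collaborative group against a "nominal group", the pooled, non-redundant recall of the same number of individuals remembering alone; real studies score recall output far more carefully.

```python
# Minimal sketch of the collaborative recall comparison: a collaborative group's
# joint output vs. the pooled, non-redundant output of a "nominal group" of the
# same size. Item lists below are invented for illustration.
from typing import Iterable, Set

def pooled_recall(individual_recalls: Iterable[Iterable[str]]) -> Set[str]:
    """Nominal-group score: the union of items recalled by individuals working alone."""
    pooled: Set[str] = set()
    for recall in individual_recalls:
        pooled.update(recall)
    return pooled

def collaborative_inhibition(collab_recall: Iterable[str],
                             individual_recalls: Iterable[Iterable[str]]) -> int:
    """Positive values mean the nominal group recalled more unique items than the
    collaborative group (the typical 'collaborative inhibition' pattern)."""
    return len(pooled_recall(individual_recalls)) - len(set(collab_recall))

# Three people recalling a studied word list alone vs. collaborating as a group:
alone = [["dog", "pear", "lamp", "key"],
         ["dog", "sun", "map", "pear"],
         ["key", "book", "car", "ring", "sun", "hat"]]
together = ["dog", "pear", "lamp", "key", "sun", "map", "book", "car", "hat"]
print(collaborative_inhibition(together, alone))  # 10 pooled items - 9 collaborative items = 1
```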


Collaborative groups reliably remember less than aggregated groups; that is, they show collaborative inhibition. This ‘cost’ of collaboration has been demonstrated for materials such as word lists, stories, pictures and historical facts (see Harris et al., 2008). Typically, groups of strangers are tested (Rajaram and Pereira-Pasarin, 2010), although groups of friends also show collaborative inhibition (Harris et al., 2013a). However, a study by Meade et al. (2009) found that expert pilots, who are trained to communicate efficiently, reversed the typical effect and showed benefits of collaboration – collaborative facilitation – when remembering aviation-relevant information, such that collaborative groups remembered more than aggregated groups.

A handful of studies suggest that intimate couples may also benefit from remembering together. For instance, Ross et al. (2004) found that older couples made fewer memory errors on a shopping list task when they collaborated, whereas Johansson et al. (2005) found that a subset of older couples – those high on division of responsibility and on agreement about expertise – were relatively less impaired by collaboration. However, these studies have not reliably demonstrated the memory facilitation that we might expect, and a number of other studies have failed to find any benefits of shared remembering in couples at all (e.g. Gould et al., 2002).

In our studies, we extended the methodology of the standard collaborative recall paradigm to study shared remembering in its everyday social context. We focused on intimate couples – the kinds of groups who regularly remember together. We also focused on a range of memory tasks, from basic word lists to significant, shared autobiographical events. Finally, we focused on the communication and interaction during collaboration and other differences between couples (e.g. relationship intimacy). We examined whether the benefits of shared remembering, as suggested by a distributed cognition framework, may be identifiable in certain kinds of groups and for certain kinds of memories.

Saturday, August 30, 2014

In Praise Of Being Bored by Alva Noë

From NPR's 13.7 Cosmos and Culture blog, Alva Noë muses on the joys (which I would add are relative) of a lazy afternoon spent watching a boring baseball game.

Personally, boredom is nearly unfathomable to me. With so much in the world that I don't yet know, I am seldom bored. And baseball, yes, IS boring, on par with watching paint dry. So this will have to be one of the few times I must disagree with Noë.

In Praise Of Being Bored


by Alva Noë
August 29, 2014


[Photo caption: Two young boys at a Cincinnati Reds game. Credit: iStockphoto]

When Bud Selig, baseball's long-serving commissioner, visited Oakland recently, he took the opportunity to bemoan the A's inadequate stadium and also to worry aloud about a topic that seems to loom large in the minds of many baseball people these days, namely, the increasingly slow pace of the game.

Indeed, the game has gotten slower over time.

A game today lasts, on average, more than 30 minutes longer than it did 30 years ago. I suspect the big culprit here is longer commercial breaks (commercials now fill between 30 and 40 minutes of a baseball game broadcast); the recent institution of instant replay may be a contributing factor. But the target of concern, as is so often the case, is the little guy — in this case, the players themselves. They're just playing too slowly, it is complained. Too much time elapses between pitches. Too many timeouts to adjust equipment or clothes.

Selig, in my opinion, is wrong on both counts. The A's have a great stadium. And baseball doesn't move too slowly.

As for the pace of the game, I am a bit surprised that MLB is concerned about this at all. Revenues are up, TV viewing is up, there are more teams than ever, and most of them are very rich. If you measure the sport's vitality in business terms, baseball, it would seem, has never been better. And if you visit the ballpark to watch a game, it's immediately clear that today's live baseball experience positively thrives on interruptions to the play. That's when you get to spend your money.

What exactly is the problem?

Don't say: "The game is too slow; it's getting boring."

As anyone who knows and loves baseball will tell you, baseball is boring. This is nothing new. Even at its most lively best, baseball is a game that unfolds at a walking pace, or at the pace of a relaxed conversation. When compared to the unstopping swarm dynamic of soccer or hockey, or the hustle and dance of basketball, baseball hardly even seems like a sport at all.

And this is what the great many of us who love baseball love about it. Baseball games aren't just long. There's no way of knowing how long they might be. A baseball game, like a good conversation, or a friendship, or a political controversy, has no fixed end. It takes however long it takes. As Selig observed, baseball is a game without a clock. And that's a good thing.

Selig also questions whether this kind of unstructured, open time is palatable in today's fast-paced world.

I say: God save us from today's ramped up, multi-interrupted, selfie-consumed, fast-paced world! We need to slow down. We need to turn off. We need to unplug. We need to start things and not know when they are going to end. We need evenings at the ballpark, evenings spent outside of real time.

What's so bad about being bored?

I found myself at a table with Europeans the other night. Inevitably, the topic turned to the relative merits of baseball and what they call "football." I had to restrain myself from expressing my irritation. It isn't that there isn't a boatload to be said about how these sports differ from each other. And it is certainly true that we love our sports and may find ourselves actively disliking the sports of others. For example, I admit that I found myself wanting to turn off the World Cup once they moved from flopping around on the floor in throes of pretend agony to actually biting each other.

But the thing is, arguing about sports is like arguing about foods. We like what we grew up with; kids around the world aren't soccer fans because, after having surveyed the world's sports, they chose soccer. And the same goes for Americans and baseball. We don't like our sports because they're great. They are great because we like them. Or, maybe, loving a sport — coming to understand it — lets you see the greatness that otherwise goes unwitnessed.

As for Selig's judgment on the A's stadium, I agree it's an old concrete and steel throwback to another era. True, there are no roller coasters, or hot tubs, or extensive gourmet food offerings. And yes, you can see the faded paint of the Raiders' gridiron running across the A's diamond.

But O.co field is a fabulous place to watch baseball. It may not be a top-of-the-line shopping mall and entertainment center, but it is a spacious, open cathedral of baseball. It is a place where baseball happens and where you just may find, if you are lucky, that you have the opportunity to relax and get bored.

You can keep up with more of what Alva Noë is thinking on Facebook and on Twitter: @alvanoe

Alan Lightman - My Own Personal Nothingness

Physicist and novelist Alan Lightman is one of my favorite science authors and has been ever since I read his novel Einstein's Dreams back in 1993. His most recent book (January 2014) is The Accidental Universe: The World You Thought You Knew. [He is also the author of A Sense of the Mysterious: Science and the Human Spirit (2005).]

This cool essay comes from Nautilus magazine, Issue 16, Chapter 4, on the topic of Nothingness.


My Own Personal Nothingness



From a childhood hallucination to the halls of theoretical physics.



By Alan Lightman | Illustration By Gérard DuBois
August 28, 2014
“Nothing will come of nothing.”
(William Shakespeare, King Lear)

“Man is equally incapable of seeing the nothingness from which he emerges and the infinity in which he is engulfed.”
(Blaise Pascal, Pensées, The Misery of Man Without God)

“The… ‘luminiferous ether’ will prove to be superfluous as the view to be developed here will eliminate [the condition of] absolute rest in space.”
(Albert Einstein, On the Electrodynamics of Moving Bodies)
MY MOST VIVID encounter with Nothingness occurred in a remarkable experience I had as a child of 9 years old. It was a Sunday afternoon. I was standing alone in a bedroom of my home in Memphis, Tennessee, gazing out the window at the empty street, listening to the faint sound of a train passing a great distance away, and suddenly I felt that I was looking at myself from outside my body. I was somewhere in the cosmos. For a brief few moments, I had the sensation of seeing my entire life, and indeed the life of the entire planet, as a brief flicker in a vast chasm of time, with an infinite span of time before my existence and an infinite span of time afterward. My fleeting sensation included infinite space. Without body or mind, I was somehow floating in the gargantuan stretch of space, far beyond the solar system and even the galaxy, space that stretched on and on and on. I felt myself to be a tiny speck, insignificant in a vast universe that cared nothing about me or any living beings and their little dots of existence, a universe that simply was. And I felt that everything I had experienced in my young life, the joy and the sadness, and everything that I would later experience, meant absolutely nothing in the grand scheme of things. It was a realization both liberating and terrifying at once. Then, the moment was over, and I was back in my body.

The strange hallucination lasted only a minute or so. I have never experienced it since. Although Nothingness would seem to exclude awareness along with the exclusion of everything else, awareness was part of that childhood experience, but not the usual awareness I would locate within the three pounds of gray matter in my head. It was a different kind of awareness. I am not religious, and I do not believe in the supernatural. I do not think for a minute that my mind actually left my body. But for a few moments I did experience a profound absence of the familiar surroundings and thoughts we create to anchor our lives. It was a kind of Nothingness.

TO UNDERSTAND anything, as Aristotle argued, we must understand what it is not, and Nothingness is the ultimate opposition to any thing. To understand matter, said the ancient Greeks, we must understand the “void,” or the absence of matter. Indeed, in the fifth century B.C., Leucippus argued that without the void there could be no motion because there would be no empty spaces for matter to move into. According to Buddhism, to understand our ego we must understand the ego-free state of “emptiness,” called śūnyatā. To understand the civilizing effects of society, we must understand the behavior of human beings removed from society, as William Golding so powerfully explored in his novel Lord of the Flies.

Following Aristotle, let me say what Nothingness is not. It is not a unique and absolute condition. Nothingness means different things in different contexts. From the perspective of life, Nothingness might mean death. To a physicist, it might mean the complete absence of matter and energy (an impossibility, as we will see), or even the absence of time and space. To a lover, Nothingness might mean the absence of the beloved. To a parent, it might mean the absence of children. To a painter, the absence of color. To a reader, a world without books. To a person impassioned with empathy, emotional numbness. To a theologian or philosopher like Pascal, Nothingness meant the timeless and spaceless infinity known only by God. When King Lear says to his daughter Cordelia, “Nothing will come of nothing,” he means that she will receive far less of his kingdom than her two fawning sisters unless she can express her boundless love for him. The second “nothing” refers to Cordelia’s silence contrasted with her sisters’ gushing adoration, while the first is her impending one-room shack compared to their opulent palaces.

Although Nothingness may have different meanings in different circumstances, I want to emphasize what is perhaps obvious: All of its meanings involve a comparison to a material thing or condition we know. That is, Nothingness is a relative concept. We cannot conceive of anything that has no relation to the material things, thoughts, and conditions of our existence. Sadness, by itself, has no meaning without reference to joy. Poverty is defined in terms of a minimum income and standard of living. The sensation of a full stomach exists in comparison to that of an empty one. The sensation of Nothingness I experienced as a child was a contrast to feeling centered in my body and in time.


 
[Image caption: The Commute: Alan Lightman en route to his summer home off the coast of Maine. Photo by Michael Segal]

MY FIRST experience with Nothingness in the material world of science occurred when I was a graduate student in theoretical physics at the California Institute of Technology. In my second year, I took a formidable course with the title of Quantum Field Theory, which explained how all of space is filled up with “energy fields,” usually called just “fields” by physicists. There is a field for gravity and a field for electricity and magnetism, and so on. What we regard as physical “matter” is the excitation of the underlying fields. A key point is that according to the laws of quantum physics, all of these fields are constantly jittering a bit—it is an impossibility for a field to be completely dormant—and the jittering causes subatomic particles like electrons and their antiparticles, called positrons, to appear for a brief moment and then disappear again, even when there is no persistent matter. Physicists call a region of space with the lowest possible amount of energy in it the “vacuum.” But the vacuum cannot be free of fields. The fields necessarily permeate all space. And because they are constantly jittering, they are constantly producing matter and energy, at least for brief periods of time. Thus the “vacuum” in modern physics is not the void of the ancient Greeks. The void does not exist. Every cubic centimeter of space in the universe, no matter how empty it seems, is actually a chaotic circus of fluctuating fields and particles flickering in and out of existence on the subatomic scale. Thus, at the material level, there is no such thing as Nothingness.

Remarkably, the active nature of the “vacuum” has been observed in the lab. The principal example lies in the energies of electrons in hydrogen atoms, which can be measured to high accuracy by the light they emit. According to quantum mechanics, the electric and magnetic field of the vacuum is constantly producing short-lived pairs of electrons and positrons. These ghostlike particles pop out of the vacuum into being, enjoy their lives for about one-billionth of one-billionth of a second, and then disappear again.

In an isolated hydrogen atom, surrounded by seemingly empty space, the proton at the center of the atom draws the fleeting vacuum electrons toward it and repels the vacuum positrons, causing its electrical charge to be slightly reduced. This reduction of the proton’s charge, in turn, slightly modifies the energy of the orbiting (non-vacuum) electrons in a process called the Lamb shift, named after physicist Willis Lamb and first measured in 1947. The measured shift in energy is quite small, only three parts in 100 million. But it agrees very closely with the complex equations of the theory, a fantastic validation of the quantum theory of the vacuum. It is a triumph of the human mind to understand so much about empty space.

The concept of empty space—and Nothingness—played a major role in modern physics even before our understanding of the quantum vacuum. According to findings in the mid 19th century, light is a traveling wave of electromagnetic energy, and it was conventional wisdom that all waves, such as sound waves and water waves, required a material medium to carry them along. Take the air out of a room, and you will not hear someone speaking. Take the water out of a lake, and you cannot make waves. The material medium hypothesized to convey light was a gossamer substance called the “ether.” Because we can see light from distant stars, the ether had to fill up all space. Thus, there was no such thing as empty space. Space was filled with the ether.

In 1887, in one of the most famous experiments in all of physics, two American physicists at what is now Case Western Reserve University in Cleveland, Ohio, attempted to measure the motion of the earth through the ether. Their experiment failed. Or rather, they could not detect any effects of the ether. Then, in 1905, the 26-year-old Albert Einstein proposed that the ether did not exist. Instead, he hypothesized that light, unlike all other waves, could propagate through completely empty space. All this was before quantum physics.

That denial of the ether, and hence embrace of a true emptiness, followed from a deeper hypothesis of the young Einstein: There is no condition of absolute rest in the cosmos. Without absolute rest, there cannot be absolute motion. You cannot say that a train is moving at a speed of 50 miles per hour in any absolute sense. You can say only that the train is moving at 50 miles per hour relative to another object, like a train station. Only the relative motion between two objects has any meaning. The reason Einstein did away with the ether is that it would have established a reference frame of absolute rest in the cosmos. With a material ether filling up all space, you could say whether an object is at rest or not, just as you can say whether a boat in a lake is at rest or in motion with respect to the water. So, through the work of Einstein, the idea of material emptiness, or Nothingness, was connected to the rejection of absolute rest in the cosmos. In sum, first there was the ether filling up all space. Then Einstein removed the ether, leaving truly empty space. Then other physicists filled space again with quantum fields. But quantum fields do not restore a reference frame of absolute rest because they are not a static material in space. Einstein’s principle of relativity remained.

One of the pioneers of quantum field theory was the legendary physicist Richard Feynman, a professor at Caltech and a member of my thesis committee. In the late 1940s, Feynman and others developed the theory of how electrons interact with the ghostly particles of the vacuum. Earlier in that decade, as a cocky young scientist, he had worked on the Manhattan Project. By the time I knew him at Caltech, in the early 1970s, Feynman had mellowed a bit but was still ready to overturn received wisdom at the drop of a hat. Every day, he wore white shirts, exclusively white shirts, because he said they were easier to match with different colored pants, and he hated to spend time fussing about his clothes. Feynman also had a strong distaste for philosophy. Although he had quite a wit, he viewed the material world in a highly straightforward manner, without caring to speculate on the purely hypothetical or subjective. He could and did talk for hours about the behavior of the quantum vacuum, but he would not waste a minute on philosophical or theological considerations of Nothingness. My experience with Feynman taught me that a person can be a great scientist without concerning him or herself with questions of “Why,” which fall beyond the scientifically provable.

However, Feynman did understand that the mind can create its own reality. That understanding was revealed in the Commencement address he gave at my graduation from Caltech in 1974. It was a boiling day in late May, outdoors of course, and we graduates were all sweating heavily in our caps and gowns. In his talk, Feynman made the point that before publishing any scientific results, we should think of all the possible ways that we could be wrong. “The first principle,” he said, “is that you must not fool yourself—and you are the easiest person to fool.”



IN THE Wachowski Brothers’ landmark film The Matrix (1999), we are well into the drama before we realize that all the reality experienced by the characters—the pedestrians walking the streets, the buildings and restaurants and night clubs, the entire cityscape—is an illusion, a fake movie played in the brains of human beings by a master computer. Actual reality is a devastated and desolate planet, in which human beings are imprisoned, comatose, in leaf-like pods and drained of their life energy to power the machines. I would argue that much of what we call reality in our lives is also an illusion, and that we are much closer to dissolution, and Nothingness, than we usually acknowledge.

Let me explain. A highly unpleasant idea, but one that has been accepted by scientists over the last couple of centuries, is that we human beings, and all living beings, are completely material. That is, we are made of material atoms, and only material atoms. To be precise, the average human being consists of about 7 × 10^27 atoms (7,000 trillion trillion atoms): 65 percent oxygen, 18 percent carbon, 10 percent hydrogen, 3 percent nitrogen, 1.4 percent calcium, 1.1 percent phosphorus, and traces of 54 other chemical elements. The totality of our tissues and muscles and organs and brain cells is composed of these atoms. And there is nothing else. To a vast cosmic being, each of us would appear to be an assemblage of atoms. To be sure, it is a special assemblage. A rock does not behave like a person. But the mental sensations we experience as consciousness and thought are purely material consequences of the purely material electrical and chemical interactions between neurons, which in turn are simply assemblages of atoms. And when we die, this special assemblage disassembles. The total number of atoms in our body at our last breath remains constant. Each atom could be tagged and tracked as it subsequently mingled with air and water and soil. The material would remain, scattered about. Each of us is a temporary assemblage of atoms, not more and not less. We are all on the verge of material disassemblage and dissolution.
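
As a rough check on that figure, here is a back-of-the-envelope sketch. It assumes a total body mass of about 70 kilograms, a number Lightman does not give; with the mass fractions he lists and standard atomic masses (in grams per mole), the atom count comes out as

\[
N \;\approx\; N_A \, M \sum_i \frac{f_i}{m_i}
\;=\; \left(6.02\times10^{23}\,\tfrac{1}{\text{mol}}\right)\left(70{,}000\,\text{g}\right)
\left(\frac{0.65}{16}+\frac{0.18}{12}+\frac{0.10}{1}+\frac{0.03}{14}+\frac{0.014}{40}+\frac{0.011}{31}\right)\tfrac{\text{mol}}{\text{g}}
\;\approx\; 6.7\times10^{27},
\]

where \(f_i\) is the mass fraction of element \(i\) and \(m_i\) its atomic mass. That rounds to the 7 × 10^27 quoted above. Note that hydrogen, though only 10 percent of the body by mass, contributes well over half of the atoms because it is so light.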

All that having been said, the sensation of consciousness is so powerful and compelling that we endow other human beings—i.e. certain other assemblages of atoms—with a transcendent quality, some nonmaterial and magnificent essence. And as the assemblage of atoms most important to each of us is our own self, we endow ourselves with a transcendent quality—a self, an ego, an “I-ness”—that blooms far larger and more significant than merely a collection of atoms.

Likewise, our human-made institutions. We endow our art and our cultures and our codes of ethics and our laws with a grand and everlasting existence. We give these institutions an authority that extends far beyond ourselves. But in fact, all of these are constructions of our minds. That is, these institutions and codes and their imputed meanings are all consequences of exchanges between neurons, which in turn are simply material atoms. They are all mental constructions. They have no reality other than that which we give them, individually and collectively.

The Buddhists have understood this notion for centuries. It is part of the Buddhist concepts of emptiness and impermanence. The transcendent, nonmaterial, long-lasting qualities that we impart to other human beings and to human institutions are an illusion, like the computer-generated world in The Matrix. It is certainly true that we human beings have achieved what, to our minds, is extraordinary accomplishment. We have scientific theories that can make accurate predictions about the world. We have created paintings and music and literature that we consider beautiful and meaningful. We have entire systems of laws and social codes. But these things have no intrinsic value outside of our minds. And our minds are a collection of atoms, fated to disassemble and dissolve. And in that sense, we and our institutions are always approaching Nothingness.

So where do such sobering thoughts leave us? Given our temporary and self-constructed reality, how should we then live our lives, as individuals and as a society? As I have been approaching my own personal Nothingness, I have mulled these questions over quite a bit, and I have come to some tentative conclusions to guide my own life. Each person must think through these profound questions for him or herself—there are no right answers. I believe that as a society we need to realize we have great power to make our laws and other institutions whatever we wish to make them. There is no external authority. There are no external limitations. The only limitation is our own imagination. So, we should take the time to think expansively about who we are and what we want to be.

As for each of us as individuals, until the day when we can upload our minds to computers, we are confined to our physical body and brain. And, for better or for worse, we are stuck with our personal mental state, which includes our personal pleasures and pains. Whatever concept we have of reality, without a doubt we experience personal pleasure and pain. We feel. Descartes famously said, “I think, therefore I am.” We might also say, “I feel, therefore I am.” And when I talk about feeling pleasure and pain, I do not mean merely physical pleasure and pain. Like the ancient Epicureans, I mean all forms of pleasure and pain: intellectual, artistic, moral, philosophical, and so on. All of these forms of pleasure and pain we experience, and we cannot avoid experiencing them. They are the reality of our bodies and minds, our internal reality. And here is the point I have reached: I might as well live in such a way as to maximize my pleasure and minimize my pain. Accordingly, I try to eat delicious food, to support my family, to create beautiful things, and to help those less fortunate than myself because those activities bring me pleasure. Likewise, I try to avoid leading a dull life, to avoid personal anarchy, and to avoid hurting others because those activities bring me pain. That is how I should live. A number of thinkers far deeper than I, most notably the British philosopher Jeremy Bentham, have come to these same conclusions via very different routes.

What I feel and I know is that I am here now, at this moment in the grand sweep of time. I am not part of the void. I am not a fluctuation in the quantum vacuum. Even though I understand that someday my atoms will be scattered in soil and in air, that I will no longer exist, that I will join some kind of Nothingness, I am alive now. I am feeling this moment. I can see my hand on my writing desk. I can feel the warmth of the sun through the window. And looking out, I can see the pine-needled path that goes down to the sea. Now.

~ Alan Lightman is a physicist, novelist, and professor of the practice of the humanities at the Massachusetts Institute of Technology. His latest book is The Accidental Universe: The World You Thought You Knew.