Saturday, April 26, 2014

Inner Speech Is Not So Simple: A Commentary on Cho and Wu (2013)

This new article from Frontiers in Psychiatry: Schizophrenia is a response to Cho and Wu's (2013) proposal regarding the mechanisms of auditory verbal hallucination (AVH) in schizophrenia.

Inner speech is not so simple: a commentary on Cho and Wu (2013)

Peter Moseley [1] and Sam Wilkinson [2] 
1. Department of Psychology, Durham University, Durham, UK
2. Department of Philosophy, Durham University, Durham, UK
A commentary on
Mechanisms of auditory verbal hallucination in schizophrenia
by Cho R, Wu W (2013). Front. Psychiatry 4:155. doi: 10.3389/fpsyt.2013.00155

We welcome Cho and Wu’s (1) suggestion that the study of auditory verbal hallucinations (AVHs) could be improved by contrasting and testing more explanatory models. However, we have some worries both about their criticisms of inner speech-based self-monitoring (ISS) models and whether their proposed spontaneous activation (SA) model is explanatory.

Cho and Wu rightly point out that some phenomenological aspects of inner speech do not seem concordant with phenomenological aspects of AVH; Langdon et al. (2) found that, while many AVHs took the third person form (“he/she”), this was a relatively rare occurrence in inner speech, both for patients with a diagnosis of schizophrenia who experienced AVHs and control participants. This is indeed somewhat problematic for ISS models, notwithstanding potential problems with the introspective measures used in the above study. However, Cho and Wu go on to ask: “how does inner speech in one’s own voice with its characteristic features become an AVH of, for example, the neighbor’s voice with its characteristic features?” (p. 2). Here, it seems that Cho and Wu simply assume that inner speech is always experienced in one’s own voice, and are not aware of research suggesting that the presence of other people’s voices is exactly the kind of quality reported in typical inner speech. For example, McCarthy-Jones and Fernyhough (3) showed that it is common for healthy, non-clinical participants to report hearing other voices as part of their inner speech, as well as to report their inner speech taking on the qualities of a dialogic exchange. This is consistent with Vygotskian explanations of the internalization of external dialogs during psychological development (4). In this light, no “transformation” from one’s own voice to that of another is needed, and no “additional mechanism” needs to be added to the ISS model (5).

In any case, this talk of transformation is misleading. There is no experience of inner speech first, which is then somehow transformed. The question about whether inner speech is implicated in AVHs is about whether elements involved in the production of inner speech experiences are also involved in the production of some AVHs. There seems to be fairly strong evidence to support this.

That inner speech involves motoric elements has been empirically supported by several electromyographic (EMG) studies [e.g., Ref. (6)]. Later experiments made the connection between inner speech and AVH, showing that similar muscular activation is involved in AVH (7, 8). The involvement of inner speech in AVH is further supported by the findings from Gould (9), who showed that when his subjects hallucinated, subvocalizations occurred that could be picked up with a throat microphone. That these subvocalizations were causally responsible for the AVHs, and not just echoing them (as has been hypothesized to happen in some cases of verbal comprehension [cf. e.g., Ref. (10)]), was suggested by Bick and Kinsbourne (11), who demonstrated that if people experiencing hallucinations opened their mouths wide, stopping subvocalization, then the majority of AVHs stopped.

Cho and Wu argue that ISS models are no better than SA models at explaining the specificity of AVHs to specific voices and content; we would argue that an ISS model, with recognition that inner speech is more complex than one’s own voice speaking in the first person, explains more than the SA model, because it explains why voices with a specific phenomenology are experienced in the first place, as opposed to more random auditory experiences that might be expected from SA in auditory cortex. The appeal to individual differences in gamma synchrony as an underlying mechanism of SA also does not seem capable of explaining why this would lead to activations of specific voice representations.

Cho and Wu go on to say that “once we allow that a given episode of AVH involves the features of another person’s voice with its characteristic acoustic features, it is simple to explain why the patient misattributes the event to another person: that is what it sounds like” (p. 2). Taken to its extreme, this implies that any episode of inner speech that involves a voice other than one’s own would be experienced as “non-self”, and hence experienced as similar to an AVH, a proposition that would clearly not find much support in empirical research. Taking this view, it is the SA model that needs an additional mechanism to explain why neuronal representations of other people’s voices are experienced not just as sounding like someone else’s voice, but also having the non-self-generated, alien quality associated with AVHs. This is exactly the type of mechanism built into ISS models of AVHs.

Indeed, the authors do go on to argue that many problems with the inner speech model of AVHs can be solved if we stop referring to “inner speech”, and instead refer to “auditory imagination”, which, supposedly, is characterized by actual acoustical properties, unlike inner speech (the authors do not cite any literature to support this claim). We would argue that this falls within the realm of typical inner speech, and that the view put forward by Cho and Wu is based on unexamined assumptions about the typical form of inner speech. We would argue that a separate “type” of imagery is not needed, and it is probable that inner speech recruits at least some mechanisms of auditory imagery. Therefore, it does not make sense to argue that AVHs resemble one, but not the other.

Finally, it should be pointed out that auditory cortical regions are not the only areas reported to lead to AVHs when directly stimulated; for example, Bancaud et al. (12) reported that stimulating the anterior cingulate cortex (ACC), an area often associated with error monitoring and cognitive control, caused auditory hallucinations, a finding that seems more compatible with self-monitoring accounts of AVH. Admittedly, it is possible that stimulation of ACC could have distal effects, also stimulating auditory cortical regions; we mention this finding simply to highlight the fact that the potential top–down effects of other brain regions on auditory cortical areas should not be overlooked.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

This research was supported by a Wellcome Trust Strategic Award, WT098455MA.

1. Cho R, Wu W. (2013). Mechanisms of auditory verbal hallucination in schizophrenia. Front Psychiatry; 4:155. doi:10.3389/fpsyt.2013.00155
2. Langdon R, Jones SR, Connaughton E, Fernyhough C. (2009). The phenomenology of inner speech: comparison of schizophrenia patients with auditory verbal hallucinations and healthy controls. Psychol Med; 39(4):655–63. doi:10.1017/S0033291708003978

3. McCarthy-Jones SR, Fernyhough C. (2011). The varieties of inner speech: links between quality of inner speech and psychopathological variables in a sample of young adults. Conscious Cogn; 20(4):1586–93. doi:10.1016/j.concog.2011.08.005
4. Fernyhough C. (2004). Alien voices and inner dialogue: towards a developmental account of auditory verbal hallucinations. New Ideas Psychol; 22(1):49–68. doi:10.1016/j.newideapsych.2004.09.001
5. McCarthy-Jones SR. (2012). Hearing Voices: the Histories, Causes and Meanings of Auditory Verbal Hallucinations. Cambridge: Cambridge University Press.

6. Jacobson E. (1931). Electrical measurements of neuromuscular states during mental activities. VII. Imagination, recollection, and abstract thinking involving the speech musculature. Am J Physiol; 97:200–9.

7. Gould LN. (1948). Verbal hallucinations and activation of vocal musculature. Am J Psychiatry; 105:367–72.
8. McGuigan F. (1966). Covert oral behaviour and auditory hallucinations. Psychophysiology; 3:73–80. doi:10.1111/j.1469-8986.1966.tb02682.x
9. Gould LN. (1950). Verbal hallucinations as automatic speech – the reactivation of dormant speech habit. Am J Psychiatry; 107(2):110–9.

10. Watkins KE, Strafella AP, Paus T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia; 41:989–94. doi:10.1016/S0028-3932(02)00316-0
11. Bick P, Kinsbourne M. (1987). Auditory hallucinations and subvocalizations in schizophrenics. Am J Psychiatry; 144:222–5.
12. Bancaud J, Talairach J, Geier S, Bonis A, Trottier S, Manrique M. (1976). Manifestations comportementales induites par la stimulation electrique du gyrus cingulaire anterieur chez l’homme. Rev Neurol; 132:705–24.

Full Citation: 
Moseley, P, and Wilkinson, S. (2014, Apr 22). Inner speech is not so simple: a commentary on Cho and Wu (2013). Frontiers in Psychiatry:Schizophrenia; 5:42. doi: 10.3389/fpsyt.2014.00042

Mechanisms of Auditory Verbal Hallucination in Schizophrenia (Cho and Wu, 2013)

This is an interesting article on the occurrence of auditory hallucinations in psychosis/schizophrenia. It comes from the open-access journal Frontiers in Psychiatry: Schizophrenia. Later today or tomorrow I will post a commentary on this article, which is also quite interesting (if you care at all about this kind of stuff).

A LOT of people experiencing post-traumatic stress disorder from childhood trauma (or developmental trauma) experience verbal hallucinations. Quite often, the voices are those of the abusers. The voices are often extremely critical and demeaning. Less often, they tell the survivor that they should be dead, or that they should hurt themselves in very specific ways (command hallucinations, which are always a bad sign).

Very infrequently, and generally only in survivors of ritual abuse, the voices are experienced as demons or devils, or even as vicious animals who are supposed to kill the survivor.

NOTE: the sketch above is by "brokenwings101" at deviantART.

Full Citation:

Cho, R, and Wu, W. (2013, Nov 27). Mechanisms of auditory verbal hallucination in schizophrenia. Frontiers in Psychiatry: Schizophrenia; 4:155. doi: 10.3389/fpsyt.2013.00155

Mechanisms of auditory verbal hallucination in schizophrenia

Raymond Cho [1,2] and Wayne Wu [3]
1. Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
2. Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
3. Center for Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two models of AVH can generate sharper questions that will lead to new avenues of investigation. In this paper, we provide such a theoretical discussion of the two models, drawing strong contrasts between them. We identify a set of challenges for the self-monitoring account and argue that the spontaneous activation account has much in favor of it and should be the default account. Our theoretical overview leads to new questions and issues regarding the explanation of AVH as a subjective phenomenon and its neural basis. Accordingly, we suggest a set of experimental strategies to dissect the underlying mechanisms of AVH in light of the two competing models.

We shall contrast two proposed mechanisms of auditory verbal hallucinations (AVH): (a) the family of self-monitoring accounts and (b) a less discussed spontaneous activity account. On the former, a monitoring mechanism tracks whether internal episodes such as inner speech are self- or externally generated while on the latter, spontaneous auditory activity is the primary basis of AVH. In one sense, self-monitoring accounts emphasize “top-down” control mechanisms; spontaneous activity accounts emphasize “bottom-up” sensory mechanisms. The aim of this paper is not to provide a comprehensive literature review on AVH as there have been recent reviews (1, 2). Rather, we believe that it remains an open question what mechanisms underlie AVH in schizophrenia, and that by drawing clear contrasts between alternative models, we can identify experimental directions to explain what causes AVH. Self-monitoring accounts have provided much impetus to current theorizing about AVH, but one salient aspect of our discussion is to raise questions as to whether such accounts, as currently formulated, can adequately explain AVH. We believe that there are in fact significant limitations to the account that have largely gone unnoticed. Still, both models we consider might hold, and this requires further empirical investigation. Conceptual and logical analysis, however, will play an important role in aiding empirical work.

Logical and Conceptual Issues Regarding Mechanisms of AVH

What is AVH? It is important to be rigorous in identifying what we are trying to explain, especially since clinical diagnosis of schizophrenia depends on patients’ reports of the phenomenology of their experience. Yet even among non-clinical populations, the concepts we use to categorize experiences may be quite fuzzy and imprecise (3). While there is little controversy that AVH involves language (emphasis on “verbal”), we restrict our attention to auditory experiences, namely where AVH is phenomenally like hearing a voice. Just as cognitive scientists distinguish between perception and thought in normal experience, we should where possible distinguish between auditory hallucination and thought phenomena (e.g., thought insertion). Admittedly, categorizing a patient’s experience as auditory or thought can be difficult, but we should first aim to explain the clear cases, those with clear auditory phenomenology. Finally, by “hallucination” we take a simple view: AVHs involve internal (auditory) representations of a verbalized sound occurring when there is no such sound.

One of the prevailing standard models of AVH invokes self- (or source-) monitoring (see Figure 1). This covers a family of models that share a core idea: a defect in a system whose role is to monitor internal episodes as self-generated. There is good evidence that patients with schizophrenia show defective self-monitoring on a variety of measures (4), and this may explain some positive symptoms such as delusions of control. It remains an open question, however, whether this defect is the causal basis of AVH.


Figure 1. Depiction of the two causal mechanisms for the generation of auditory verbal hallucination (AVH). The self-monitoring model is more complex than the spontaneous activity account.
We begin with proposals for the mental substrate of AVH: inner speech, auditory imagery, or auditory memory (5, 6). Most models take inner speech to be the substrate, but we think there are compelling phenomenological reasons against this (7, 8). Inner speech is generally in one’s own voice, is in the first-person point of view (“I”), and often lacks acoustical phenomenology (9). But even if there is controversy whether inner speech is auditory, there is no such controversy regarding AVH. As clinicians are aware, patients reflecting on the phenomenology of AVH typically report strong acoustical properties that are typically characteristic of hearing another person’s voice, where the voice typically speaks from a second- or third-personal point of view (“you” or “them” as opposed to “I”). Thus, what is typically represented in AVH experiences is starkly different from what is represented in normal inner speech experiences.

This point on phenomenological grounds indicates that a self-monitoring model that identifies inner speech as the substrate for AVH incurs an additional explanatory burden. It must provide a further mechanism that transforms the experience of the subject’s own inner voice, in the first-person and often lacking acoustical properties such as pitch, timbre, and intensity, into the experience of someone else’s voice, in the second- or third-person, with acoustical properties. For example, how does inner speech in one’s own voice with its characteristic features become an AVH of, for example, the neighbor’s voice with its characteristic features? To point out that a patient mistakenly attributes the source of his own inner speech to the neighbor does not explain the phenomenological transformation. Indeed, once we allow that a given episode of AVH involves the features of another person’s voice with its characteristic acoustic features, it is simple to explain why the patient misattributes the event to another person: that is what it sounds like. Indeed, a phenomenological study found that in addition to being quite adept at differentiating AVH from everyday thoughts, patients found that the identification of the voices as another person’s voice was a critical perceptual feature that helped in differentiating the voices from thought (10). The point then is that inner speech models of AVH are more complex, requiring an additional mechanism to explain the distinct phenomenology of AVH vs. inner speech.

On grounds of simplicity, we suggest that self-monitoring accounts should endorse auditory imagination of another person’s voice as the substrate of AVH. If they do this, they obviate a need for the additional transformation step since the imagery substrate will already have the requisite phenomenal properties. Auditory imagination is characterized by acoustical phenomenology that is in many respects like hearing a voice: it represents another’s voice with its characteristic acoustical properties. Thus, our patient may auditorily imagine the neighbor’s voice saying certain negative things, and this leads to a hallucination when the subject loses track of this episode as self-generated. On this model, there is then no need for a mechanism that transforms inner speech phenomenology to AVH phenomenology. Only the failure of self-monitoring is required, a simpler mechanism. In this case, failure of self-monitoring might be causally sufficient for AVH. We shall take these imagination-based self-monitoring models as our stalking horse.

But what is the proposed “self-tagging” mechanism? We think self-monitoring theorists need more concrete mechanistic proposals here. The most detailed answer invokes corollary discharge, mostly in the context of forward models (11–13). This answer has the advantage that it goes beyond metaphors, and we have made much progress in understanding the neurobiology of the corollary discharge signal and forward modeling [see Ref. (14) for review]. These ideas have an ancestor in von Helmholtz’s explanation of position constancy in vision in the face of eye movements: why objects appear to remain stable in position even though our eyes are constantly moving. Position constancy is sometimes characterized as the visual system’s distinguishing between self- and other-induced retinal changes, and this is suggestive of what goes wrong in AVH.

Forward models, and more generally predictive models, build on a corollary discharge signal and have been widely used in the motor control literature [see Ref. (15) for review; for relevant circuitry in primates controlling eye movement, see Ref. (16)]. The basic model has also been extended to schizophrenia (11–13), and a crucial idea is that the forward model makes a sensory prediction that can then suppress or cancel the sensory reafference. When the prediction is accurate, then sensory reafference is suppressed or canceled, and the system can be said to recognize that the episode was internally rather than externally generated.

There are, in fact, many details that must be filled in for this motor control model to explain AVH, but that is a job for the self-monitoring theorist. Rather, we think that failure of self-monitoring, understood via forward models, is still not sufficient for AVH, even with auditory imagination as the substrate. The reason is that there is nothing in the cancelation or suppression of reafference – essentially an error signal – that on its own says anything about whether the sensory input is self- or other-generated. Put simply: a zero error signal is not the same as a signal indicating self-generation, nor is a positive error signal the same as a signal indicating other-generation. After all, the computation of prediction error is used in other systems that have nothing to do with AVH, say movement control. Thus, while an error signal can be part of the basis of AVH, it is not on its own sufficient for it. Given the failure of sufficiency of self-monitoring models on their own to explain AVH, we believe that an additional mechanism beyond failure of self-monitoring will always be required to explain AVH. Specifically, the required mechanism, an interpreter, is one that explains the difference between self and other. The challenge for proponents of the account is to specify the nature of this additional mechanism [e.g., (17)]. As Figure 1 shows, the mechanism for AVH on self-monitoring accounts is quite complex.

We now turn to an alternative that provides a simple causally sufficient condition for AVH: AVH arise from spontaneous activity in auditory and related memory areas. We shall refer to this account as the spontaneous activity account [for similar proposals, see Ref. (2, 18, 19)]. The relevant substrates are the activation of specific auditory representations of voices, whether in imagination or memory recall (for ease of expression in what follows, we shall speak of activity in auditory areas to cover both sensory and relevant auditory memory regions, specifically the activation of representations of the voice that the subject typically experiences in AVH). We think that this should be the default account of AVH for the following reason: we know that appropriate stimulation of auditory and/or relevant memory areas is sufficient for AVH. Wilder Penfield provided such a proof of concept experiment.

Penfield and Perot (20) showed dramatically that stimulation along the temporal lobe resulted in quite complex auditory hallucinations. For instance, stimulations along the superior temporal gyrus elicited AVH with phenomenology typical of schizophrenia AVH, as noted in their case reports: “They sounded like a bunch of women talking together” (p. 622); or more indistinct AVH – “Just like someone whispering, or something, in my left ear” or “a man’s voice, I could not understand what he said” (p. 640). Activity in these areas, spontaneous or induced, in the absence of an actual auditory input constitutes auditory hallucination: auditory experience of external sounds that do not in fact exist. The clinical entity epilepsy provides a similar, more naturalistic example of spontaneous cortical activity giving rise to AVH, in addition to a wide variety of other positive symptoms (21). The hypothesis of spontaneous activity is that AVH in schizophrenia derives from spontaneous activation of auditory representations.

The basic idea of the spontaneous activity account is that the auditory system, broadly construed, encodes representations of previously heard voices and their acoustical properties. In the case of normal audition, some of these representations are also activated by actual voices in the environment, leading ultimately to auditory experiences that represent those voices. The idea of hallucination then is that these representations can be spontaneously activated in the absence of environmental sounds. Thus, the initiation of an episode of AVH is driven by spontaneous activation. Certainly, as in many cases of AVH, the experience can be temporally extended. This might result from some top-down influences such as the subject’s attention to the voice, which can further activate the relevant regions, leading to an extended hallucination. Indeed, there are reported cases of patients answering the AVH with inner speech in a dialog [12 of 29 patients in Ref. (9)]. Further, that top-down processes are abnormally engaged by bottom-up influences is suggested by the finding that there is a disturbance in connectivity from the superior temporal gyrus (a sensory region) to the anterior cingulate cortex (involved in cognitive control) in patients with AVH compared to patients without AVH and healthy controls (22). This raises two points: (1) that an inner speech response could induce additional activation of auditory representations and (2) that self-monitoring models that take inner speech as the AVH substrate are quite implausible here since the monitoring mechanism must go on and off precisely with the AVH and inner speech exchange.

Since the auditory representations spontaneously activated already encode a distinct person’s voice and its acoustical properties, there is no need for a system to interpret the voice represented as belonging to another person (recall the issue regarding inner speech above). That information about “otherness” is already encoded in the representation of another’s voice. In this way, the spontaneous activity account bypasses the complex machinery invoked in self-monitoring. On its face, then, the spontaneous activity account identifies a plausible sufficient causal condition for AVH that is much simpler than self-monitoring mechanisms. There are, of course, details to be filled in, but our point is to contrast two possible mechanisms: self-monitoring and spontaneous activity. We hope, at least, to have provided some initial reasons why the latter might be more compelling.

When a field is faced with two contrasting models, it must undertake specific experiments to see which may hold. Before we delve into concrete experimental proposals, we want to highlight some remaining conceptual points. First, the models are not contradictory, and thus both could be true. We might discover that across patients with AVH or across AVH episodes within a single patient, AVH divides between the two mechanisms. Second, both models make some similar predictions, namely that in AVH, we should see the absence of appropriate self-monitoring in an AVH episode, though for different reasons. On the self-monitoring account, this absence is due to a defect in the self-monitoring system; on the spontaneous activity account, this absence is due to spontaneous activity that bypasses self-monitoring. Critically, empirical evidence that demonstrates absence of appropriate self-monitoring during AVH does not support the self-monitoring account as against the spontaneous activity account. We need different experiments.

Finally, the spontaneous activity account is more parsimonious as seen in Figure 1, but is this really an advantage? Note that all accounts of AVH must explain (a) the spontaneity of AVH episodes (they often just happen) and (b) the specificity of phenomenology in AVH, namely negative content, second or third-person perspective, and a specific voice, identity and gender, etc. Both accounts can deal with the spontaneity: in self-monitoring, it is explained by spontaneous failure of self-monitoring so that AVH feels spontaneous; in the spontaneous activity account, it is the actual spontaneous activity of auditory areas.

A major challenge to all theories of AVH is to explain its specificity: why is it often a specific voice, negative in content, and focused on specific themes? Still, self-monitoring accounts face an additional challenge given that self-monitoring is a general mechanism applied to all internal episodes: why aren’t more internal auditory episodes experienced as “other”? For if a general self-monitoring mechanism for auditory processing fails, one would expect many more kinds of AVH: of their own voice as in playback, of environmental sounds, music, and so on [auditory non-verbal hallucinations are reported, but much less often than voices; (23)]. Self-monitoring accounts could postulate a highly specific failure of self-monitoring, but note that this is no better than spontaneous activity theorists postulating spontaneous activation of specific representations. So, either self-monitoring mechanisms, being general, make predictions inconsistent with the facts or they are no better off on the specificity of AVH than the alternative spontaneous activation account.

Potential Experimental Directions

In the previous section, we provided conceptual and logical grounds for distinguishing two mechanisms of AVH and for why we might prefer the spontaneous activity account. Still, the issues are fundamentally empirical. There are two plausible, starkly different mechanisms to explain AVH. What experiments might tease them apart? Studies in schizophrenia have characterized the neural correlates of the AVH-prone trait or of actual AVH events, yielding important information about the potential functional and neuroanatomic basis for AVH, for instance, identifying areas involved in speech generation and perception [e.g., (24–26); also see meta-analyses by Jardri et al. (2); Modinos et al. (27); Palaniyappan et al. (28)], including evidence of competition for neurophysiologic resources that subserve normal processing of external speech (29). However, these studies have largely been correlative. For instance, the activity of Broca’s area reported by McGuire et al. (24) could be interpreted either as due to sub-vocalizations that give rise to AVH per se or as reflecting responses to the AVH, as commonly occurs. As such, prior work has largely lacked the experimental interventions that could establish causality and help to adjudicate between varying accounts. We outline potential interventions and their potential utility in deciding between the self-monitoring and spontaneous activity accounts.

The two accounts outlined above propose either lack of functioning (self-monitoring account) or inappropriate activation (spontaneous activation account) of neural/cognitive processes. Accordingly, potential experimental interventions could include perturbing self-monitoring processes vs. stimulation of sensory areas in healthy individuals to approximate the AVH experience reported by patients. One could also implement the same interventions in patients – perhaps to greater effect, as they could most authoritatively provide first-hand verification of whether such interventions reproduce AVH phenomenology. Finally, in patients who experience AVH as part of their illness, therapeutic interventions could remediate putative disturbances and could result in resolution of AVH symptoms; if such interventions could specifically target the processes under consideration, such an approach could provide strong support for a causal account of AVH (see Table 1).


Table 1. Auditory verbal hallucination mechanisms: proof of concept studies.

Testing Self-Monitoring Accounts

Would perturbing self-monitoring processes provide evidence that it is the basis of AVH in schizophrenia? This is not clear. For example, a straightforward prediction of certain self-monitoring accounts that identify inner speech as the relevant substrate would be that perturbing self-monitoring processes in healthy individuals would be causally sufficient for the experience of hearing one’s own voice but with attribution to an external source. Concretely interpreted, it would be akin to listening to a playback of one’s recorded voice. This result, however, would only show that such perturbation yields a distinct form of AVH, namely hallucination of one’s voice as in a playback [though “replay” of patients’ thoughts or speech did not emerge as a significant contributor to the cluster analytic descriptions of the phenomenology; (19)]. Given the phenomenological differences from typical AVH, additional mechanisms are required to explain the full array of AVH characteristics, as noted in our “neighbor” example above.

A more plausible substrate for AVH is auditory imagination of another’s voice, so in principle, perturbation of self-monitoring during such imagination might yield AVH phenomenologically similar to that in schizophrenia. This seems a plausible test of self-monitoring accounts, though its implementation would require more concrete localization of the relevant self-monitoring mechanisms [Jardri et al. (2) noted that midline cortical structures typically implicated in self-monitoring paradigms did not emerge as significant in their meta-analyses of imaging studies of AVH]. The relevant experiments are yet to be done, but the claim that failure of self-monitoring is sufficient for AVH raises the question noted above: why don’t patients exhibit a wider range of AVH phenomenology?

What of interventions in patient populations vis-à-vis self-monitoring mechanisms? Employing a similar approach in schizophrenia patients would lead to similar predictions as in non-clinical populations, subject to the same logical constraints. Accordingly, upon experimentally interfering with self-monitoring during inner speech, a patient may report that they now have the experience of AVH of their own voice that is notably novel and distinct from their already existing AVH. Alternatively, with intervention during auditory imagination, they may report that their AVH has simply worsened in intensity/frequency, retaining similar phenomenology.

Nevertheless, there will always remain a logical gap in these inductions of AVH in both healthy and patient populations: the mechanisms of AVH could still be driven by spontaneous activation even if these interventions show that disruption of self-monitoring suffices for a form of AVH. Thus, what is required is to manipulate the putative mechanism during episodes of AVH in patients. This is not to say, however, that the previous experiments are useless. Far from it. Minimally, we can treat them as proof-of-concept experiments: they can demonstrate that such mechanisms can do the purported work, namely the induction of AVH. It is worth emphasizing a difference between the two mechanisms: while the spontaneous activation account already has a proof-of-concept result (e.g., Penfield’s work), no such demonstration has yet been provided for self-monitoring in respect of AVH.

How then to directly manipulate the purported mechanism? Since the self-monitoring approach postulates a loss of function, an obvious manipulation is to see whether AVH is ameliorated by restoration of this function. Cognitive behavioral therapies tailored to psychosis have been successful at improving symptoms, but the targets of therapy (e.g., distress from psychosis, depression) as well as the benefits (e.g., positive and negative symptoms) have been non-specific (30). While such approaches have clear clinical value, a more ideal approach for elucidating AVH mechanisms would entail interventions that specifically target source-monitoring processes. One case report described a patient whose most prominent clinical symptom was daily thought-insertion experiences and whose training involved improving their ability to accurately recall the source of self- vs. experimenter-generated items (31). Interestingly, the thought-insertion symptoms did not improve as would be hypothesized, but an auditory hallucinations subscale showed improvement with training. So, while this finding is limited by interpretive issues, sample size, and a task that only indirectly taps self-monitoring processes (i.e., involving recall of sources rather than a more on-line measure), larger studies with refined task paradigms could yield a more definitive test of monitoring accounts of AVH. Conversely, given a means to safely perturb self-monitoring, following the logic for healthy individuals, one could similarly (further) impair this process in patients, with similar considerations of the array of possible phenomenological outcomes. These experimental proposals are summarized in Table 2.


Table 2. Auditory verbal hallucination mechanisms: testing self-monitoring account in schizophrenia.

Testing Spontaneous Activity Accounts

We have argued that a more parsimonious account of AVH involves susceptibility of relevant brain regions to spontaneous activation without external stimulation or volitional impetus. Such a mechanism could account for the full phenomenology of AVH, assuming that brain areas that represent the complex content and form of AVH show spontaneous activations of the sort found in normal auditory experience. Also consistent with the idea of spontaneous activation in sensory areas are findings from studies of auditory cortical responses in schizophrenia. The ability of cortical networks to coordinate their activity through gamma (30–80 Hz) oscillations is thought to be critical for the binding of perceptual features in creating coherent object representations (32). In studies of auditory processing, individuals with schizophrenia show reduced gamma synchrony compared with healthy controls (33–37). Interestingly, however, patients show correlations between auditory hallucination severity and gamma synchrony (38), consistent with the idea that a preserved excitability of sensory cortical areas is necessary for inappropriate spontaneous activations giving rise to AVH.

On this account, instabilities in sensory cortical areas would lead to spontaneous activations, but would only do so in a coordinated fashion in those individuals with a preserved ability to sustain gamma synchrony, thus giving rise to AVH. AVH would be absent both in patients with a reduced capacity for sustaining gamma synchrony and in healthy individuals, due, respectively, to the inability to sustain the coordinated activity necessary for the perception of AVH or to the lack of instabilities that would inappropriately activate the cortex. Notably, such spontaneous activations could bypass any monitoring process, since there is no inner speech or other self-initiated process to issue a corollary discharge that would engage such monitoring, impaired or otherwise.
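The gamma synchrony invoked here is typically quantified with a phase-locking value (PLV) between two recording sites. The following is a minimal, purely illustrative sketch with simulated signals (the frequencies, noise levels, and durations are arbitrary assumptions, not patient data), showing how the PLV separates a coordinated pair of 40 Hz oscillations from an uncoordinated pair:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)    # 2 s of simulated "recording"

# Synchronized pair: two sites driven by a common 40 Hz (gamma) rhythm,
# each with independent measurement noise.
common = np.sin(2 * np.pi * 40 * t)
x = common + 0.3 * rng.standard_normal(t.size)
y = common + 0.3 * rng.standard_normal(t.size)

# Unsynchronized control pair: oscillators at slightly different
# frequencies, so their phase relationship drifts continuously.
u = np.sin(2 * np.pi * 40 * t)
v = np.sin(2 * np.pi * 43 * t)

def plv(a, b):
    """Phase-locking value: 1 = phases perfectly locked, ~0 = no locking."""
    phase_diff = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

print(f"synchronized pair PLV:   {plv(x, y):.2f}")   # high (near 1)
print(f"unsynchronized pair PLV: {plv(u, v):.2f}")   # low (near 0)
```

In real EEG/MEG analyses the signals would first be band-pass filtered into the gamma band before extracting instantaneous phase; the simulated signals here are narrow-band by construction, so that step is omitted.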

The most definitive experiments to distinguish between the two models involve interventions in the activity of relevant auditory areas. The logic is as follows: the spontaneous activation account holds that the basis of AVH in patients with schizophrenia is the aberrant activation of relevant sensory areas. Accordingly, if it were possible to suppress such activation “on-line” while patients were experiencing AVH and show that such suppression, say by TMS, attenuated the hallucinatory experience, this would be good evidence that such activity was causally necessary for AVH in schizophrenia. An intervention to test causal sufficiency would be to then stimulate the areas identified as necessary to see if AVH of the same kind could be generated. Positive results along both dimensions would be strong support for the spontaneous activity account.

This proposal has been indirectly tested by Hoffman and other groups making similar use of TMS as a treatment for auditory hallucinations (39, 40). In contrast to the event-related design described above, the focus of Hoffman and others has been on clinical efficacy rather than elucidation of mechanisms. Accordingly, TMS interventions have been applied according to protocols whose timing is determined irrespective of their precise relationship to specific episodes of AVH. While results are varied, meta-analyses confirm the utility of such a therapeutic approach (41, 42). So, while an “on-line” cortical suppression-mediated reduction in AVH would be more compelling, the general efficacy of such an approach is consistent with a local cortical spontaneous activation account.

One complication with the previous experiment arises if self-monitoring accounts require the activity of relevant sensory areas in self-monitoring computations. Intervention in sensory areas might then also affect the processes the self-monitoring account invokes to explain AVH. To make progress here, self-monitoring theorists must explain what computational role the sensory areas might play. There seem to us two possibilities. First, the sensory areas might compute the signal that is then compared against the corollary discharge signal so as to compute prediction error, as in forward modeling. If so, the experiment we have just proposed would help to adjudicate between the two models. On the spontaneous activity account, suppression of activation in sensory areas would suppress AVH. The opposite result would be expected on this version of the self-monitoring account: self-generated signals are associated with zero or low error when the corollary discharge is compared with the reafference signal, so when sensory areas are suppressed by TMS, the prediction is that the computed error will be larger, yielding an “other” signal. Thus, AVH should be exacerbated when activity in sensory areas is suppressed.

A second possibility is more congenial to self-monitoring accounts that invoke auditory imagery, since the imagery requires activation of relevant sensory areas (5, 43). On this view, the areas in question are precisely the substrate of the experience; what is critical is that the subject loses track of the fact that they are actively imagining. Consequently, AVH results. This version of self-monitoring would then agree with spontaneous activity accounts in predicting that suppression of activity in auditory areas during AVH yields reduced AVH. The difference between the models, then, is that the self-monitoring account takes sensory activation to be driven top-down. Accordingly, there is an additional way to disrupt AVH on this version of self-monitoring: disrupting the top-down signal driving sensory activation. The spontaneous activity account, by contrast, holds that there is no such top-down signal, so the only way to manipulate AVH is via the sensory area itself. To experimentally separate these models, we again need concrete mechanistic proposals from the self-monitoring account so that possible experimental manipulations can be designed (see Table 3).


Table 3. Auditory verbal hallucination mechanisms: testing spontaneous activity account in schizophrenia.


There is much experimental work yet to be done on specific mechanisms for AVH. While we favor one model, our goal has been to clarify the conceptual landscape in the hopes of prompting more directed experiments to determine which is operative [though, as we noted, both or perhaps an integration of bottom-up and top-down accounts (44, 45) could be operative, yielding multiple mechanisms for AVH]. Having competing accounts on the field should aid focused inquiry on testing concrete mechanistic proposals. In doing this, we believe that we can make more progress toward understanding what causes AVH in patients with schizophrenia.

Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

This research was supported by K08 MH080329 and NARSAD awards to Raymond Cho and by the Pennsylvania Department of Health through a Commonwealth Universal Research Enhancement grant to Wayne Wu. We also would like to thank Dr. Robert Sweet for thoughtful comments on earlier versions of the manuscript.

References are available at the Frontiers site.

How the Brain Pays Attention: Identifying Regions of the Brain Dealing with Object-Based, Spatial Attention

New research has identified a brain circuit that is key to shifting attention from one object to another. The researchers found that there is object-based attention and spatial attention, each of which is centered in different parts of the brain.

The prefrontal cortex ("inferior frontal junction (IFJ), which controls visual processing areas that are tuned to recognize a specific category of objects") is where the brain is able to switch attention from one target to another.

How the brain pays attention: Identifying regions of the brain dealing with object-based, spatial attention

Date: April 10, 2014
Source: Massachusetts Institute of Technology

A brain circuit that's key to shifting our focus from one object to another has been identified by neuroscientists. The new findings suggest that there are two types of attention that have similar mechanisms involving related brain regions: object-based attention, and spatial attention. In both cases, the prefrontal cortex -- the control center for most cognitive functions -- appears to take charge of the brain's attention and control relevant parts of the visual cortex, which receives sensory input.

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you're seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what's happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT's McGovern Institute for Brain Research, and senior author of the paper.

"The interactions are surprisingly similar to those seen in spatial attention," Desimone says. "It seems like it's a parallel process involving different areas."

In both cases, the prefrontal cortex -- the control center for most cognitive functions -- appears to take charge of the brain's attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task -- such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms -- two images per second and 1.5 images per second -- allowing them to identify brain regions responding to those stimuli.

"We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus," says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
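Frequency tagging of the sort Baldauf describes can be illustrated with a toy simulation. This is only a sketch under invented parameters (the gains, sampling rate, and noise level are assumptions for illustration, not the study's data): two stimulus streams tagged at 2 Hz and 1.5 Hz are mixed into a single "sensor" signal, and the amplitude spectrum recovers each stream's response at its own tag frequency:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s -> 0.05 Hz spectral resolution

# One "sensor" sees both streams at once; attention is modeled here simply
# as a larger response gain for the attended (face) stream.
face_gain, house_gain = 2.0, 1.0
sensor = (face_gain * np.sin(2 * np.pi * 2.0 * t)      # face stream, 2 Hz tag
          + house_gain * np.sin(2 * np.pi * 1.5 * t)   # house stream, 1.5 Hz tag
          + 0.5 * rng.standard_normal(t.size))         # measurement noise

# The two tags come apart as separate peaks in the amplitude spectrum.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amps = 2 * np.abs(np.fft.rfft(sensor)) / t.size

amp_face = amps[np.argmin(np.abs(freqs - 2.0))]
amp_house = amps[np.argmin(np.abs(freqs - 1.5))]
print(f"amplitude at 2 Hz (faces):    {amp_face:.2f}")   # ~ face_gain
print(f"amplitude at 1.5 Hz (houses): {amp_house:.2f}")  # ~ house_gain
```

Because the two rhythms occupy distinct frequency bins, responses to spatially overlapping stimuli can be read out separately, which is the point of the tagging design.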

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds -- about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.
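A staggered timing relationship like the 20 ms lead described above can be estimated from two time series by cross-correlation: the offset at which the correlation peaks estimates the delay of the follower relative to the driver. Below is a minimal sketch with simulated data (the signals, the imposed 20 ms lag, and the noise level are assumptions chosen to mirror the reported finding, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                      # 1000 samples/s -> 1 ms resolution

# A smooth, band-limited "driver" signal: noise smoothed with a short
# Hanning kernel, then normalized to unit variance.
kernel = np.hanning(11)
kernel /= kernel.sum()
source = np.convolve(rng.standard_normal(2000), kernel, mode="same")
source /= source.std()

# The "follower" region repeats the driver 20 samples (= 20 ms) later,
# plus a little independent noise.
lag_ms = 20
follower = np.roll(source, lag_ms) + 0.1 * rng.standard_normal(source.size)

# Cross-correlate; the offset of the peak estimates the conduction delay.
xc = np.correlate(follower, source, mode="full")
est = np.argmax(xc) - (source.size - 1)
print(f"estimated delay: {est} ms")
```

In the actual study the directionality was assessed via gamma-phase relationships rather than raw cross-correlation, but the logic is the same: a consistent temporal offset identifies which region leads.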

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.

Members of Desimone's lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

"You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit," Desimone says. "It's too early to say whether this training is even going to work at all, but it's something that we're actively pursuing."

Story Source:
The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Anne Trafton. Note: Materials may be edited for content and length.

Journal Reference:
Baldauf, D., & Desimone, R. (2014, Apr 25). Neural Mechanisms of Object-Based Attention. Science, 344(6182), 424-427. DOI: 10.1126/science.1247003
* * * * *

Neural Mechanisms of Object-Based Attention

Daniel Baldauf, Robert Desimone


How we attend to objects and their features that cannot be separated by location is not understood. We presented two temporally and spatially overlapping streams of objects, faces versus houses, and used magnetoencephalography and functional magnetic resonance imaging to separate neuronal responses to attended and unattended objects. Attention to faces versus houses enhanced the sensory responses in the fusiform face area (FFA) and parahippocampal place area (PPA), respectively. The increases in sensory responses were accompanied by induced gamma synchrony between the inferior frontal junction, IFJ, and either FFA or PPA, depending on which object was attended. The IFJ appeared to be the driver of the synchrony, as gamma phases were advanced by 20 ms in IFJ compared to FFA or PPA. Thus, the IFJ may direct the flow of visual processing during object-based attention, at least in part through coupled oscillations with specialized areas such as FFA and PPA. 

Editor's Summary

House or Face?

The neural mechanisms of spatial attention are well known, unlike nonspatial attention. Baldauf and Desimone (p. 424, published online 10 April) combined several technologies to identify a fronto-temporal network in humans that mediates nonspatial object-based attention. There is a clear top-down directionality of these oscillatory interactions, establishing the inferior-frontal cortex as a key source of nonspatial attentional inputs to the inferior-temporal cortex. Surprisingly, the mechanisms for nonspatial attention are strikingly parallel to the mechanisms of spatial attention.

Friday, April 25, 2014

A Trauma-Based Model of Mental Illness (preliminary thoughts)

Here is another section (still in the process of being written) from the paper I have been working on for a couple of months now - or maybe it will be a monograph, since it keeps getting longer and longer.

This section (much of which is still missing citations) proposes a new model of mental illness that does away with many of the diagnoses we now find in the DSM. Rather, it proposes a trauma-based model that sees symptoms as adaptations to the traumatic experience.

As I said, this is VERY preliminary - just began writing it yesterday. Any feedback is welcome.

A Trauma-Based Model of Mental Illness

It is my belief, based on years of reading the trauma literature and working with sexual trauma clients in therapy, that nearly all the traits we label as mental illness are more accurately understood as adaptations (or clusters of adaptations) to traumatic experience, either interpersonal or "shock."

Interpersonal traumas are those occurring between people in relationship, such as neglect, abuse, bullying, and attachment failures. The younger one is when these traumas occur, the more profound their impact on brain development.

"Shock" traumas are those single frightening events that can seriously disrupt our lives and our basic understanding of the world. These may include natural disasters, accidents, stranger rape, muggings, and other unexpected, unpredictable violent disruptions of our lives.

The greater the severity of the trauma, the more extreme the adaptations a survivor makes to cope with the experience. Early interpersonal trauma tends to be more difficult to treat than shock trauma, unless the person experiencing the shock trauma has also had adverse childhood experiences. I propose that we can create a spectrum of how these adaptations are generated and how they function, ranging from less extreme to more extreme.

At one end we might have the adaptation cluster often labeled as an adjustment disorder, with anxiety or depression being common expressions. Addictions and other forms of self-numbing behavior would also likely be in the first half of the spectrum.

An issue with some of the adaptations, particularly addictions, is that they generally co-occur with other adaptations. For example, post-traumatic stress disorder (PTSD) often co-occurs with mood symptoms, addictions, or disordered personality structures.

PTSD would be somewhere near the middle, although its manifestation can be mild to severe. Further down the spectrum would be dissociative disorders, the most extreme form of PTSD, with the most extreme adaptation being dissociative identity disorder (DID).

At the far end would be full-blown psychosis, representing a cumulative experience so awful and unbearable that reality becomes intolerable, necessitating a retreat into an alternate reality often imbued with a sense of importance or specialness, which is true even in paranoid iterations of psychosis.

It's important to keep in mind that when these interpersonal traumas occur during development, they create changes in the way the brain is wired, particularly the right hemisphere, the source of affect regulation, interpersonal skills, and body-mind integration.

Likewise, many of the adaptations noted here will manifest in the brain as shrinkage of one set of circuits or enlargement of another. For example, PTSD can produce an enlarged amygdala and a smaller hippocampus. Prolonged environmental stress also generates excessive levels of cortisol and other stress hormones that can damage brain function and leave the survivor in a near constant state of hypervigilance.

How Trauma Changes the Brain

When confronted with a stressor, a series of events occurs in the body that generates what we now call the “fight or flight response.” This process evolved early in the history of life on Earth to allow organisms to act in a situation where their life was in danger. In response, the body generates the energy for either a “fight” or a “flight” through activation of the nervous system and the endocrine system in order to maximize resources for surviving the threat (the stressor).

The following is paraphrased from Neigh, Gillespie, and Nemeroff (2009).

Researchers have identified two phases to this process. When the stressor is detected, the initial phase of the stress response begins. The sympathetic nervous system (associated with action, the "fight or flight" response) releases norepinephrine from nerve terminals and epinephrine from the adrenal medulla into the general circulation. Both of these neurochemicals are stimulants in their effects on the body.

In the secondary phase, moments later, corticotropin releasing factor (CRF) is released by “parvocellular neurons of the hypothalamic paraventricular nucleus into the hypothalamo-hypophyseal portal system for transport to the anterior pituitary gland where it stimulates the release of adrenocorticotropic hormone (ACTH) into the general circulation” (Swanson, Sawchenko, Rivier, & Vale, 1983; cited in Neigh, Gillespie, and Nemeroff, 2009). The ACTH travels to the adrenal cortex where it stimulates the release of glucocorticoids (cortisol is the primary stress hormone in primates). It generally takes several minutes for these processes, which are characteristic of the hypothalamic-pituitary-adrenal (HPA) axis stress response, to become fully activated.

Following the crisis, activity in the HPA axis is dampened through negative feedback (mediated by the parasympathetic nervous system, associated with "recuperation") via stimulation of glucocorticoid receptors within the hippocampus, hypothalamus, and anterior pituitary (Jacobson & Sapolsky, 1991). In a crisis, this stress response allows an organism to shift biological resources away from whatever activity was its focus and to engage physiological functions that promote survival.

However, if the stress response becomes chronic due to repeated exposure to stressors, or a physiological deficit in the negative feedback system (or both), the organism experiences an on-going excess in stress hormone levels, which can trigger pathological changes in a variety of physiological systems, leading to stress-related diseases (McEwen, 2008).

This near-constant state of "activation" also leads to many of the symptoms of PTSD, including anxiety, memory deficits, hypervigilance, and an exaggerated startle response. The inability or failure of the body to metabolize the stress hormones, representing in essence a situation that cannot be escaped, results in the third and fourth of the Four F's - fight, flight, freeze, and fold. The freeze response is the most common experience for those who have endured on-going trauma, and the fold represents complete surrender, a profound state of "giving up."
Whitehouse and Heller explain it this way:
Part of the problem is that when these states occur, discharge of the intense energies mobilized to meet threat often becomes thwarted. Often we just don't have the time necessary to complete them. Nevertheless, the survival energy has mobilized for fight or flight, but literally has no place to go and ends up being converted into symptoms. (Whitehouse & Heller, 2008)
The freeze response (fold is very rare, so it will not be discussed here) is characterized by a simultaneous activation of the sympathetic and parasympathetic nervous systems. According to Peter Levine, the creator of Somatic Experiencing:
We have several synonyms for freeze, including dissociation, immobility, spacing out, deer in the headlights look. In the healthy nervous system it still serves and protects us humans, but often freeze is associated with the residual crippling effects of trauma. Here's what happens that causes humans to get stuck in trauma. (Levine, 1992)
References (partial)
  • Neigh, GN, Gillespie, CF, and Nemeroff, CB. (2009, Aug 6). The Neurobiological Toll of Child Abuse and Neglect. Trauma Violence Abuse; 10: 389-410. DOI: 10.1177/1524838009339758
  • Jacobson, L., & Sapolsky, R. (1991). The role of the hippocampus in feedback regulation of the hypothalamic-pituitary-adrenocortical axis. Endocrine Reviews, 12, 118-134.
  • McEwen, BS. (2008). Central effects of stress hormones in health and disease: Understanding the protective and damaging effects of stress and stress mediators. European Journal of Pharmacology, 583, 174-185.
  • Whitehouse, B., & Heller, DP. (2008). Heart Rate in Trauma: Patterns Found in Somatic Experiencing and Trauma Resolution. Biofeedback, 36(1).
  • Levine, P. (1992). Somatic Experiencing. The Foundation for Human Enrichment. http://www.traumahealing.com/index.html

Brain: The Last Enigma (Documentary)


From New Atlantis Full Documentaries, this is a slightly dated but interesting little documentary on the enigma of the human brain. Among the experts interviewed are brain scientists Steven Rose, Wolf Singer, Carlos Belmonte, and Vilayanur Ramachandran (among others).

The central focus is on the puzzle of how the brain creates mind, the well-known "hard problem" of philosophy and neuroscience.

Brain: The Last Enigma (2003)

Top Documentary Films

Man's pursuit of scientific knowledge advances faster every day. From the beginning of the universe or the birth of a star to the origin of life and genetic inheritance, the enigmas of existence have progressively been figured out. However, the human brain, which has allowed man to achieve this knowledge, is still a mystery.

The essence of human intelligence is summarized in music, the perfect combination of reason and emotion. For example, each member of a musical quartet, awash in sensation, uses his memory to remember the piece, glancing at the musical score every once in a while and concentrating to create a virtuoso performance in unison with his companions.

The technique is already subconscious, while their attention is aimed at creating the most moving performance. This is the objective: to provoke an emotion that, in this case, the listeners can intimately share, and which arouses sensations of pleasure, and sometimes rejection, in the mind.

In order for this to happen, music needs memory, intelligence, and will. Where are they located, and how are they produced? These abilities come from the brain, the corporeal organ where all our thoughts and actions are developed. This is also where the mind is found, as well as the essence of a person: what she thinks, what she feels, and what she imagines. It is the hotbed of sensations and feelings, of images and ideas, of each individual human being.

Due to its complexity, the brain is one of the territories under scientific investigation with the most mysteries. The basics of its biology are known, as well as some models of its functioning, but it is still not understood why some things are remembered while others are not, or why such a strange mechanism as the human mind worries about itself, about the origin of life, and about the meaning of death.

Watch the full documentary now - 51 min

Teenage Brainstorm (Dr. Dan Siegel) - All in the Mind

Last weekend's episode of All in the Mind featured an interview with author and neuropsychiatrist Dr. Daniel Siegel, talking about his most recent book on the teenage brain, Brainstorm: The Power and Purpose of the Teenage Brain (2014).

Teenage Brainstorm

Sunday 20 April 2014
All in the Mind | Lynne Malcolm

When you’re a teenager, life is on fire: wildly exciting, with limitless possibilities. It can also be overwhelming and dangerous. In the past, raging hormones have been blamed, but we’re now learning that it comes down to the very particular and important way the adolescent brain develops. Professor Dan Siegel has researched the brain and emotional development of children, and now he focuses on the emerging adolescent mind during the years between 12 and 24. With a new understanding of the science and purpose behind this stage of development, he suggests ways for young people to capture the positive essence of adolescence, based on mindful awareness techniques.

Audio: Hear about the Wheel of Awareness practice, one of Dr Dan Siegel’s mindsight exercises | Download MP3 (2.9MB)


Professor Daniel J. Siegel, Clinical Professor of Psychiatry, University of California Los Angeles; Author


Brainstorm: The Power and Purpose of the Teenage Brain
Daniel J. Siegel (2014)
An Inside-Out Guide to The Emerging Adolescent Mind, Ages 12 to 24

Mindsight: The New Science of Personal Transformation
Daniel J. Siegel (2010)
Change your brain and your life


Thursday, April 24, 2014

Cognitive Skills Decline from the Age of 24, Especially on StarCraft 2

Okay, I admit when I saw this headline, my first thought was, "Well, sh!t, that was nearly half a lifetime ago. I'm screwed." Fortunately, I know better than to trust headlines (which is why I changed it for the title of this post). The study is based on ability to play a video game called StarCraft 2.

The younger the players, the better their skills on five specific measures:
  • Looking-doing latency (similar to reaction time)
  • Dual-task performance
  • Total reported hours of StarCraft 2 experience
  • Effective use of hotkeys
  • Effective management of view-screens/maps
The older players, however, adapted to their reaction time limitations and remained competitive.

"Older players, though slower, seem to compensate by employing simpler strategies and using the game's interface more efficiently than younger players, enabling them to retain their skill, despite cognitive motor-speed loss."
So maybe I am over the hill for video game play, but that's cool. I would not trade the experience and wisdom I have now for youth for any amount of money.

Our cognitive skills decline from the age of 24, but there is hope

Saturday 19 April 2014 
Written by David McNamee 
  If you are an adult who has ever been told by a partner or colleague that you are "too old to be playing video games," then they may well have a point. A new study - using a video game as a test - has found that people over the age of 24 are past their peak in terms of cognitive motor performance.

Generally, the researchers behind the new study observe, people tend to think of middle age as being around 45 years of age - around the time when age-related declines in cognitive-motor functioning become obvious.

But there is evidence that our memory and speed relating to cognitive tasks peak much earlier in our lives.

However, data on this is limited because most scientific studies examining the relationship of cognitive motor performance and aging focus on elderly populations, rather than when the decline in performance actually begins.

The authors note that some researchers have investigated the origins of cognitive motor performance decline but have only used simple reaction time tasks to measure performance. 
The new study - carried out by two doctoral students from Simon Fraser University in Burnaby, Canada, and their thesis supervisor - is built around a large-scale social science experiment involving the real-time space-faring strategy game StarCraft 2.

The data for the study came from the researchers replaying and analyzing 870 hours of gameplay from 3,305 StarCraft 2 players aged between 16 and 44.

How can StarCraft 2 be used to measure cognitive motor performance?

In the game, players have to successfully manage their civilization's economy and military growth, with the objective of tactically defeating their opponent's army.

All aspects of gameplay occur in real time, so the player is required to make a large number of adjustments continuously, and they must carefully make decisions and develop overall strategies in a manner that the researchers compare to chess or managing an emergency.

Attention to detail and fast reaction time are both important components of successful gameplay.


The researchers analyzed the following variables of gameplay:

  • Looking-doing latency (similar to reaction time)
  • Dual-task performance
  • Total reported hours of StarCraft 2 experience
  • Effective use of hotkeys
  • Effective management of view-screens/maps
Complex statistical modeling then allowed the researchers to arrive at meaningful results relating to the players' game behaviors and response time.
"After around 24 years of age, players show slowing in a measure of cognitive speed that is known to be important for performance," reveals lead author and doctoral student Joe Thompson. "This cognitive performance decline is present even at higher levels of skill." 
But there is hope yet for you older gamers. Because - parallel to the cognitive performance decline in the over-24 year olds - Thompson and his colleagues noticed the older players adapting naturally to their cognitive disadvantages.

"Our research tells a new story about human development," claims Thompson.

"Older players, though slower, seem to compensate by employing simpler strategies and using the game's interface more efficiently than younger players, enabling them to retain their skill, despite cognitive motor-speed loss."
By efficiently manipulating the use of hotkeys and multiple screens, the older players were able to make up for their delayed speed in executing real-time commands.

"Our cognitive-motor capacities are not stable across our adulthood," suggests Thompson, "but are constantly in flux." He considers that the results of this study - his doctorate thesis, which is published in PLOS One - demonstrate how "our day-to-day performance is a result of the constant interplay between change and adaptation."

In January, Medical News Today reported on a study that linked slow reaction time to risk of early death.

Full Citation:
Thompson, JJ, Blair, MR, and Henrey, AJ. (2014, Apr 9). Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood. PLOS One. DOI: 10.1371/journal.pone.0094215

Here is the abstract to the study (you can read the whole study by following the link below):

Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood

Joseph J. Thompson, Mark R. Blair, Andrew J. Henrey

Published: April 09, 2014
DOI: 10.1371/journal.pone.0094215


Typically, studies of the effects of aging on cognitive-motor performance emphasize changes in elderly populations. Although some research is directly concerned with when age-related decline actually begins, studies are often based on relatively simple reaction time tasks, making it impossible to gauge the impact of experience in compensating for this decline in a real world task. The present study investigates age-related changes in cognitive motor performance through adolescence and adulthood in a complex real world task, the real-time strategy video game StarCraft 2. In this paper we analyze the influence of age on performance using a dataset of 3,305 players, aged 16-44, collected by Thompson, Blair, Chen & Henrey [1]. Using a piecewise regression analysis, we find that age-related slowing of within-game, self-initiated response times begins at 24 years of age. We find no evidence for the common belief that expertise should attenuate domain-specific cognitive decline. Domain-specific response time declines appear to persist regardless of skill level. A second analysis of dual-task performance finds no evidence of a corresponding age-related decline. Finally, an exploratory analysis of other age-related differences suggests that older participants may have been compensating for a loss in response speed through the use of game mechanics that reduce cognitive load.
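The "piecewise regression analysis" the abstract mentions can be illustrated with a minimal sketch. The code below is not the authors' actual model; it assumes a simple two-segment continuous fit (an intercept, a baseline slope, and a hinge term that adds extra slope after a candidate breakpoint), grid-searched over candidate breakpoints on synthetic latency data with a kink planted at age 24:

```python
import numpy as np

def fit_piecewise(ages, latencies, candidate_breaks):
    """Grid-search a continuous two-segment linear fit.

    For each candidate breakpoint b, fit by ordinary least squares:
        y = c0 + c1 * age + c2 * max(0, age - b)
    and keep the breakpoint with the lowest residual sum of squares.
    """
    best = None
    for b in candidate_breaks:
        X = np.column_stack([
            np.ones_like(ages),              # intercept
            ages,                            # baseline slope
            np.maximum(0.0, ages - b),       # hinge: extra slope after b
        ])
        coef, _, _, _ = np.linalg.lstsq(X, latencies, rcond=None)
        rss = np.sum((latencies - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, b, coef)
    return best[1], best[2]  # chosen breakpoint, coefficients

# Synthetic data: latency flat until age 24, then rising ~6 ms/year.
rng = np.random.default_rng(0)
ages = rng.uniform(16.0, 44.0, 500)
latencies = 300.0 + 6.0 * np.maximum(0.0, ages - 24.0)
latencies += rng.normal(0.0, 5.0, ages.shape)

brk, coef = fit_piecewise(ages, latencies, np.arange(18.0, 40.0, 0.5))
print(brk)  # recovered breakpoint; should fall near the planted age of 24
```

On data like this the grid search recovers a breakpoint close to 24, and the hinge coefficient estimates the additional slowing per year after the break. The published analysis is considerably more elaborate, but this is the core idea behind locating the age at which decline begins.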