
Tuesday, October 14, 2014

Steve Fleming - A Theory of Consciousness Worth Attending to

http://bookverdict.com/images/covers/9780199928644.jpg

Steve Fleming, whose blog is The Elusive Self, offers a review/overview of a new theory of consciousness, as outlined by Michael Graziano in his 2013 book, Consciousness and the Social Brain.

I happen to have this book, but I have not gotten around to reading it. In the front matter of the book there is a brief overview of the theory.

SPECULATIVE EVOLUTIONARY TIMELINE OF CONSCIOUSNESS

The theory at a glance: from selective signal enhancement to consciousness. About half a billion years ago, nervous systems evolved an ability to enhance the most pressing of incoming signals. Gradually, this attentional focus came under top-down control. To effectively predict and deploy its own attentional focus, the brain needed a constantly updated simulation of attention. This model of attention was schematic and lacking in detail. Instead of attributing a complex neuronal machinery to the self, the model attributed to the self an experience of X—the property of being conscious of something. Just as the brain could direct attention to external signals or to internal signals, that model of attention could attribute to the self a consciousness of external events or of internal events. As that model increased in sophistication, it came to be used not only to guide one’s own attention, but for a variety of other purposes including understanding other beings. Now, in humans, consciousness is a key part of what makes us socially capable. In this theory, consciousness emerged first with a specific function related to the control of attention and continues to evolve and expand its cognitive role. The theory explains why a brain attributes the property of consciousness to itself, and why we humans are so prone to attribute consciousness to the people and objects around us.

Timeline: Hydras evolve approximately 550 million years ago (MYA) with no selective signal enhancement; animals that do show selective signal enhancement diverge from each other approximately 530 MYA; animals that show sophisticated top-down control of attention diverge from each other approximately 350 MYA; primates first appear approximately 65 MYA; hominids appear approximately 6 MYA; Homo sapiens appear approximately 0.2 MYA.

This article does a good job of explaining the theory and, in essence, reviewing the book.

A theory of consciousness worth attending to


Steve Fleming | The Elusive Self
Oct 12, 2014

There are multiple theories of how the brain produces conscious awareness. We have moved beyond the stage of intuitions and armchair ideas: current debates focus on hard empirical evidence to adjudicate between different models of consciousness. But the science is still very young, and there is a sense that more ideas are needed. At the recent Association for the Scientific Study of Consciousness* meeting in Brisbane my friend and colleague Aaron Schurger told me about a new theory from Princeton neuroscientist Michael Graziano, outlined in his book Consciousness and the Social Brain. Aaron had recently reviewed Graziano’s book for Science, and was enthusiastic about it being a truly different theory – consciousness really explained.

I have just finished reading the book, and agree that it is a novel and insightful theory. As with all good theories, it has a “why didn’t I think of that before!” quality to it. It is a plausible sketch, rather than a detailed model. But it is a testable theory and one that may turn out to be broadly correct.
When constructing a theory of consciousness we can start from different premises. “Information integration” theory begins with axioms of what consciousness is like (private, rich) in order to build up the theory from the inside. In contrast, “global workspace” theory starts with the behavioural data – the “reportability” of conscious experience – and attempts to explain the presence or absence of reports of awareness. Each theory has a different starting point but ultimately aims to explain the same underlying phenomenon (similar to physicists starting either with the very large – planets – or the very small – atoms, and yet ultimately aiming for a unified model of matter).

Dennett’s 1991 book Consciousness Explained took the reportability approach to its logical conclusion. Dennett proposed that once we account for the various behaviours associated with consciousness – the subjective reports – there is nothing left to explain. There is nothing “extra” that underpins first-person subjective experience (contrast this with the “hard problem” view: there is something to be explained that cannot be solved within the standard cognitive model, which is exactly why it’s a hard problem). I read Dennett’s book as an undergraduate and was captivated by the idea that there might be a theory that explains subjective reports from the ground up, reliant only on the nuts and bolts of cognitive psychology. Here was a potential roadmap for understanding consciousness: if we could show how A connects to B, B connects to C, and C connects to the verbalization “I am conscious of the green of the grass” then we have done our job as scientists. But there was a nagging doubt: does this really explain our inner, subjective experience? Sure, it might explain the report, but it seems to be throwing out the conscious baby with the bathwater. In a playful mood, some philosophers have suggested that Dennett himself might be a zombie because for him, the only relevant data on consciousness are the reports of others!

But the problem is that subjective reports are one of the few observable features we have to work with as scientists of consciousness. In Graziano’s theory, the report forms the starting point. He then goes deeper to propose a mechanism underpinning this report that explains conscious experience.


To ensure we’re on the same page, let’s start by defining the thing we are trying to explain. Consciousness is a confusing term – some people mean level of consciousness (e.g. coma vs. sleep vs. being awake), others mean self-consciousness, others mean the contents of awareness that we have when we’re awake – an awareness that contains some things, such as the green of an apple, but not others, such as the feeling of the clothes against my skin or my heartbeat. Graziano’s theory is about the latter: “The purpose of this book is to present a theory of awareness. How can we become aware of any information at all? What is added to produce awareness?” (p. 14).

What is added to produce awareness? Cognitive psychology and neuroscience assume that the brain processes information. We don’t yet understand the details of how much of this processing works, but the roadmap is there. Consider a decision about whether you just saw a faint flash of light, such as a shooting star. Under the informational view, the flash causes changes to proteins in the retina, which lead to neural firing, information encoding in visual cortex and so on through a chain of synapses to the verbalization “I just saw a shooting star over there”. There is, in principle, nothing mysterious about this utterance. But why is it accompanied by awareness?

Scientists working on consciousness often begin with the input to the system. We say (perhaps to ourselves) “neural firing propagating across visual cortex doesn’t seem to be enough, so let’s look for something extra”. There have been various proposals for this “something extra”: oscillations, synchrony, recurrent activity. But these proposals shift the goalposts – neural oscillations may be associated with awareness, but why should these changes in brain state cause consciousness? Graziano takes the opposite tack, and works from the motor output, the report of consciousness, inwards (it is perhaps no coincidence that he has spent much of his career studying the motor system). Awareness does not emanate from additional processes that are laid on top of vanilla information processing. Instead, he argues, the only thing we can be sure of about consciousness is that it is information. We say “I am conscious of X”, and therefore consciousness causes – in a very mainstream, neuroscientific way – a behavioural report. Rather like finding the source of a river, he suggests that we should start with these reports and work backwards up the river until we find something that resembles its source. It’s a supercharged version of Dennett: the report is not the end-game; instead, the report is our objective starting point.


I recently heard a psychiatrist colleague describe a patient who believed that a beer can inside his head was receiving radio signals that were controlling his thoughts. There was little that could be done to shake the delusion – he admitted it was unusual, but he genuinely believed that the beer can was lodged in his skull. As scientist observers we know this can’t be true: we can even place the man inside a CT scanner and show him the absence of a beer can.

But – and this is the crucial move – the beer can does exist for the patient. The beer can is encoded as an internal brain state, and this information leads to the utterance “I have a beer can in my head”. Graziano proposes that consciousness is exactly like the beer can. Consciousness is real, in the sense that it is an informational state that leads us to report “I am aware of X”. But there are no additional properties in the brain that make something conscious, beyond the informational state encoding the belief that the person is conscious. Consciousness is, in this way, a collective delusion – if only one of us were constantly saying, “I am conscious,” we might be as skeptical as we are in the case of the beer can, and scan his brain saying “But look! You don’t actually have anything that resembles consciousness in there”.

Hmm, I hear you say, this still sounds rather Dennettian. You’ve replaced consciousness with an informational state that leads to report. Surely there is more to it than that? In Graziano’s theory, the “something extra” is a model of attention, called the attention schema. The attention schema supplies the richness behind the report. Attention is the brain’s way of enhancing some signals but not others. If we’re driving along in the country and a sign appears warning of deer crossing the road, we might focus our attention on the grass verges. But attention is a process of enhancement or suppression. The state of attention is not represented anywhere in the system [1]. Instead, awareness is the brain’s way of representing what attention is doing. This makes the state of attention explicit. By being aware of looking at my laptop while writing these words, the informational content of awareness is “My attention is pointed at my computer screen”.

Graziano suggests that the same process of modeling our own attentional state is applied to (and possibly evolved from) the ability to model the attentional focus of others [2]. And, because consciousness is a model, rather than a reality that either exists or does not, it has an appealing duality to its existence. We can attribute awareness to ourselves. But we can also attribute awareness to something else, such as a friend, our pet dog, or the computer program in the movie “Her”. Crucially, this attribution is independent of whether they each also attribute awareness to themselves.


The attention schema theory is a sketch for a testable theory of consciousness grounded in the one thing we can measure: subjective report. It provides a framework for new experiments on consciousness and attention, consciousness and social cognition, and so on.  On occasion it over-generalizes. For instance, free will is introduced as just another element of conscious experience. I found myself wondering how a model of attention could explain our experience of causing our actions, as required to account for the sense of agency. Instead, perhaps we should think of the attention schema as a prototype model for different elements of subjective report. For instance, a sense of agency could arise from a model of the decision-making process that allows us to say “I caused that to happen” – a decision schema, rather than an attention schema.

Of course, many problems remain unsolved. How does the theory account for unconscious perception? Does it predict when attention should dissociate from awareness? What would a mechanism for the attention schema look like? How is the modeling done? We may not yet have all the answers, but Graziano’s theory is an important contribution to framing the questions.

*I am currently Executive Director of the ASSC. The views in this post are my own and should not be interpreted as representing those of the ASSC.

[1] The importance of “representational redescription” of implicitly embedded knowledge was anticipated by Clark & Karmiloff-Smith (1992): “What seems certain is that a genuine cognizer must somehow manage a symbiosis of different modes of representation – the first-order connectionist and the multiple levels of more structured kinds” (p. 515). Importantly, representational redescription is not necessary to complete a particular task, but it is necessary to represent how the task is being completed. As Graziano says: “There is no reason for the brain to have any explicit knowledge about the process or dynamics of attention. Water boils but has no knowledge of how it does it. A car can move but has no knowledge of how it does it. I am suggesting, however, that in addition to doing attention, the brain also constructs a description of attention… and awareness is that description” (p. 25). And: “For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.” (p. 147).

[2] Graziano suggests this is not a metacognitive theory of consciousness because it accounts not only for the abstract knowledge that we are aware but also the inherent property of being aware. But this assertion erroneously conflates metacognition with abstract knowledge. Instead, a model of another cognitive process, such as the attention schema as a model of attention, is inherently metacognitive. Currently there is little work on metacognition of attention, but such experiments may provide crucial data for testing the theory.

Wednesday, September 24, 2014

Focused Attention, Open Monitoring, and Loving Kindness Meditation: Effects on Attention, Conflict Monitoring, and Creativity – A Review

http://neuroconscience.files.wordpress.com/2013/04/image2_meditationbrain.jpg

In this new mini review from Frontiers in Psychology: Cognition, Lippelt, Hommel, and Colzato compare three meditation types (focused attention, open monitoring and loving kindness) in terms of their effects on attention, conflict monitoring, and creativity.

The three research areas the authors covered in this review (attentional control, performance monitoring, and creativity or thinking style) seem to imply the operation of extended neural networks, which might suggest that meditation operates on neural communication, perhaps by impacting neurotransmitter systems. They speculate:
Finally, it may be interesting to consider individual differences more systematically. If meditation really affects interactions between functional and neural networks, it makes sense to assume that the net effect of meditation on performance depends on the pre-experimental performance level of the individual—be it in terms of compensation (so that worse performers benefit more) or predisposition (so that some are more sensitive to meditation interventions).

Full Citation: 
Lippelt DP, Hommel B and Colzato LS. (2014, Sep 23). Focused attention, open monitoring and loving kindness meditation: effects on attention, conflict monitoring, and creativity – A review. Frontiers in Psychology: Cognition. 5:1083. doi: 10.3389/fpsyg.2014.01083

Focused attention, open monitoring and loving kindness meditation: effects on attention, conflict monitoring, and creativity – A review


Dominique P. Lippelt, Bernhard Hommel and Lorenza S. Colzato
  • Cognitive Psychology Unit, Institute for Psychological Research and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
Meditation is becoming increasingly popular as a topic for scientific research and theories on meditation are becoming ever more specific. We distinguish between what is called focused attention meditation, open monitoring meditation, and loving kindness (or compassion) meditation. Research suggests that these meditations have differential, dissociable effects on a wide range of cognitive (control) processes, such as attentional selection, conflict monitoring, and divergent and convergent thinking. Although research on exactly how the various meditations operate on these processes is still missing, different kinds of meditations are associated with different neural structures and different patterns of electroencephalographic activity. In this review we discuss recent findings on meditation and suggest how the different meditations may affect cognitive processes, and we give suggestions for directions of future research.

Introduction


Even though numerous studies have shown meditation to have significant effects on various affective and cognitive processes, many still view meditation as a technique primarily intended for relaxation and stress reduction. While meditation does seem to reduce stress and to induce a relaxing state of mind, it can also have significant effects on how people perceive and process the world around them and alter the way they regulate attention and emotion. Lutz et al. (2008) proposed that the kind of effect meditation has is likely to differ according to the kind of meditation that is practiced. Currently the most researched types of meditation include focused attention meditation (FAM), open monitoring meditation (OMM), and loving-kindness meditation (LKM). Unfortunately, however, the methodological diversity across the available studies with regard to sample characteristics, tasks used, and experimental design (within vs. between group; with vs. without control condition) renders the comparison between them difficult. This review is primarily focused on FAM and OMM studies1 and on how these two (proto-)types of meditation are associated with different neural underpinnings and differential effects on attentional control, conflict monitoring, and creativity.


Meditation Types



Usually, FAM is the starting point for any novice meditator (Lutz et al., 2008; Vago and Silbersweig, 2012). During FAM the practitioner is required to focus attention on a chosen object or event, such as breathing or a candle flame. To maintain this focus, the practitioner has to constantly monitor the concentration on the chosen event so as to avoid mind wandering (Tops et al., 2014). Once practitioners become familiar with the FAM technique and can easily sustain their attentional focus on an object for a considerable amount of time, they often progress to OMM. During OMM the focus of the meditation becomes the monitoring of awareness itself (Lutz et al., 2008; Vago and Silbersweig, 2012). In contrast to FAM, there is no object or event in the internal or external environment that the meditator has to focus on. The aim is rather to stay in the monitoring state, remaining attentive to any experience that might arise, without selecting, judging, or focusing on any particular object. To start, however, the meditator will focus on a chosen object, as in FAM, but will then gradually reduce this focus while emphasizing the monitoring of awareness itself.


Loving-kindness meditation incorporates elements of both FAM and OMM (Vago and Silbersweig, 2012). Meditators focus on developing love and compassion first for themselves and then gradually extend this love to ever more “unlikeable” others (e.g., from self to a friend, to someone one does not know, to someone one dislikes, and finally to all living beings). Any negative associations that might arise are supposed to be replaced by positive ones such as pro-social or empathic concern.

Meditation Types, Attentional Scope, and Endogenous Attention


Whereas some meditation techniques require the practitioners to focus their attention on only a certain object or event, other techniques allow any internal or external experiences or sensations to enter awareness. Different meditation techniques might therefore bias the practitioner to either a narrow or broad spotlight of attention. This distinction is thought to be most evident with regard to FAM and OMM. FAM induces a narrow attentional focus due to the highly concentrative nature of the meditation, whereas OMM induces a broader attentional focus by allowing and acknowledging any experiences that might arise during meditation.

In a seminal study, Slagter et al. (2007) investigated the effects of 3 months of intensive Vipassana meditation (an OMM-like meditation) training on the allocation of attention over time as indexed by the “attentional-blink” (AB) deficit, thought to result from competition between two target stimuli (T1 and T2) for limited attentional resources. After the training, presumably because of the acquisition of a broader attentional scope, participants showed a smaller AB deficit, indicating that they were better able to distribute attentional resources across both T1 and T2. The reduced AB size was accompanied by a smaller T1-elicited P3b, a brain-potential thought to index attentional resource allocation.

A more recent study comparing meditators (trained in mindfulness-based stress-reduction) to non-meditators found that meditators show evidence of more accurate and efficient visual attention (Hodgins and Adair, 2010). Meditators monitored events more accurately in a concentration task and showed less interference from invalid cues in a visual selective attention task. Furthermore, meditators showed improved flexible visual attention by identifying a greater number of alternative perspectives in multiple-perspective images. Another study compared OMM and FAM meditators on a sustained attention task (Valentine and Sweet, 1999): OMM meditators outperformed FAM meditators when the target stimulus was unexpected. This might indicate that the OMM meditators had a wider attentional scope, even though the two meditator groups did not differ in performance when the stimulus was expected.


Electrophysiological evidence for meditation-induced improvements in attention comes from a recent study in which Vipassana meditators performed an auditory oddball task before and after meditation (in one session) and random thinking (in another session; Delgado-Pastor et al., 2013). The meditation session was composed of three parts: first, an initial part of self-regulation of attention, focused on sensations from air entering and leaving the body at the nostrils; second, a central part of focusing attention on sensations from all parts of the body while maintaining an attitude of non-reactivity and acceptance; and last, a brief final part aimed at generating feelings of compassion and unconditional love toward all living beings. Meditators showed greater P3b amplitudes to target tones after meditation than either before meditation or after the no-meditation session, an effect that is thought to reflect enhanced attentional engagement during the task.

Support for the assumption that FAM induces a narrow attentional focus comes from several studies that show that FAM increases sustained attention (Carter et al., 2005; Brefczynski-Lewis et al., 2007). Neuroimaging evidence by Hasenkamp et al. (2012) suggests that FAM is associated with increased activity in the right dorsolateral prefrontal cortex (dlPFC), which has been associated with “the repetitive selection of relevant representations or recurrent direction of attention to those items” (D’Esposito, 2007, p. 765). Thus, in the context of meditation experience, dlPFC might be involved in repeatedly redirecting or sustaining attention to the object of focus. It would be interesting to investigate whether this pattern of activation is unique to FAM or whether other kinds of meditation lead to similar increases in activity in the dlPFC. If the dlPFC is indeed involved in the repetitive redirection of attention to the same object of focus, then it should not be as active during OMM, during which attention is more flexible and continuously shifted to different objects. Alternatively, however, if during OMM the meditator achieves a state of awareness where (only) awareness itself is the object of focus, the dlPFC might again play a role in maintaining this focus. Similarly, it would be interesting to examine how LKM modulates attentional processes and the activation of the dlPFC.

In a follow-up study, Hasenkamp and Barsalou (2012) found that, during rest, connectivity between the right dlPFC and the right insula was stronger in experienced meditators than in novices. The authors suggest that improved connectivity with the right insula might reflect enhanced interoceptive attention to internal bodily states. In support of this idea, a recent study reports that mindfulness training predicted greater activity in posterior insula regions during interoceptive attention to respiratory sensation (Farb et al., 2013). Various studies have shown theta activity to be increased during meditation, primarily OMM-like meditations (e.g., Baijal and Srinivasan, 2010; Cahn et al., 2010; Tsai et al., 2013; for review see Travis and Shear, 2010). This increase in theta activity, usually mid-frontal, has been suggested to be involved in sustaining internalized attention. As such, similar increases in theta activity would be expected for LKM, during which attention is also internalized, but not during FAM, where attention is explicitly focused on an external object, even though typically the object of meditation in FAM, at least for beginners, is the breath, which is internal.


Additionally, active mindfulness meditation (versus rest) was associated with increased functional connectivity between the dorsal attention network, the Default Mode Network and the right prefrontal cortex (Froeliger et al., 2012). Thus, meditation practice seems to enhance connectivity within and between attentional networks and a number of broadly distributed other brain regions subserving attention, self-referential, and emotional processes.

Meditation Types and Conflict Monitoring


A fundamental skill acquired through meditation is the ability to monitor the attentional focus in order to “redirect it” in the case of conflicting thoughts or external events. Not surprisingly, several studies have already shown improvements in conflict monitoring after meditation. Tang et al. (2007) investigated whether a training technique based on meditational practices called integrative body-mind training (IBMT; most similar to OMM) could improve performance on an attentional network task (ANT; Fan et al., 2002). The ANT was developed to keep track of three different measures, namely orienting, alerting, and conflict resolution. While IBMT had no effect on orienting and alerting scores, it did improve conflict resolution. In a similar study, FAM and OMM were compared on an emotional variant of the ANT. Both types of meditation improved conflict resolution compared to a relaxation control group (Ainsworth et al., 2013). Surprisingly, there was no difference between the two meditation types, even though mindfulness disposition at baseline (i.e., trait mindfulness) was also associated with improved conflict resolution.


Further evidence for improvements in conflict monitoring comes from a study investigating the effect of a 6-week-long FAM training (versus relaxation training and a waiting-list group) on a discrimination task intended to investigate the relationship between attentional load and emotional processing (Menezes et al., 2013). Participants had to respond to whether or not the orientation of two lines presented to either side of an emotionally distracting picture was the same. Importantly, those who underwent a meditation or relaxation training committed fewer errors than the waiting-list control group. Furthermore, error rates were lowest in the meditation group and highest in the waiting-list group, while the relaxation group scored in between. With regard to emotional regulation, meditators showed less emotional interference than the other two groups when attentional load was low, and only meditators showed a relationship between the amount of weekly practice and reductions in emotional interference.


In a study by Xue et al. (2011), meditation-naïve participants were randomly assigned to either an 11 h IBMT course or a relaxation training. Compared to the relaxation training, the IBMT group showed higher network efficiency and degree of connectivity of the anterior cingulate cortex (ACC). As the ACC is involved in processes such as self-regulation, detecting interference and errors, and overcoming impasses (e.g., Botvinick et al., 2004), improvements in ACC functioning might well be the neural mechanism by which IBMT improves conflict resolution. In an interesting study by Hasenkamp et al. (2012), experienced meditators engaged in FAM inside an fMRI scanner and pushed a button whenever they started to mind-wander. The moment of awareness of mind-wandering was associated with increased activity in the dorsal ACC. Thus, as the mind starts to wander during meditation, the ACC might detect this “error” and feed it back to executive control networks (Botvinick et al., 1999; Carter and van Veen, 2007), so that attention can be refocused. Various other studies have also shown improvements in ACC functioning after meditation (Lazar et al., 2000; Baerentsen et al., 2001; Tang et al., 2009, 2010). Hölzel et al. (2007) compared experienced and novice meditators during a concentrative meditation (akin to FAM) and found that the experienced meditators showed greater activity in the rostral ACC during meditation than the novices, even though the two groups did not differ on an arithmetic control task. Similar results were obtained in another study comparing novices and experienced meditators (Baron Short et al., 2007), which showed more activity in the ACC during FAM compared to a control task. The activity in the ACC was more consistent and sustained for experienced meditators. Related to that, Buddhist monks exhibited more activity in the ACC during FAM than during OMM (Manna et al., 2010). This suggests that the effects of meditation on the ACC and conflict monitoring do not seem to be limited to temporary state effects but carry over into daily life as a more stable “trait.” Future large-scale longitudinal studies should be conducted to address this issue and to disentangle short-term and long-term effects on conflict monitoring.


Improved conflict monitoring does not necessarily entail increased brain activity. Kozasa et al. (2012) compared meditators and non-meditators on a Stroop task in which semantic associations of words have to be suppressed to retrieve the color of the word. While behavioral performance was not significantly different for the two groups, compared to meditators, the non-meditators showed more activity in brain regions related to attention and motor control during incongruent trials. Given that the aim of many meditation techniques is to monitor the automatic arising of distracting sensations, this skill may become effortless through repeated meditation, thereby leading to less brain activity during the Stroop task. LKM has been shown to improve conflict resolution as well, when LKM and a control group were compared on a Stroop task. The LKM group was faster in responding to both congruent and incongruent trials, and the difference between congruent and incongruent trials (the congruency effect) was smaller as well (Hunsinger et al., 2013). As LKM incorporates elements of both FAM and OMM, it would be interesting to investigate whether the effect size associated with LKM falls in between those of FAM and OMM.


Recently, meditators and non-meditators were compared on measures of the cortical silent period and short intracortical inhibition over the motor cortex, which index GABAB receptor-mediated and GABAA receptor-mediated inhibitory neurotransmission, respectively, before and after a 60-minute meditation (for the meditators) or cartoon viewing (for the non-meditators) (Guglietti et al., 2013). Given that deficits related to cortical silent periods in the motor cortex had been previously associated with psychiatric illness and emotional dysregulation, the activity over the motor cortex was measured. No differences were found between meditators and non-meditators before the meditation/cartoon. However, after meditation there was a significant increase in GABAB activity in the meditator group. The authors suggest that “improved cortical inhibition of the motor cortex, through meditation, helps reduce perceptions of environmental threat and negative affect through top down modulation of excitatory neural activity” (Guglietti et al., 2013, p. 400). Future research might investigate whether similar GABA-related mechanisms underlie the suppression of distracting stimuli during meditation and how different types of meditation might have distinguishable effects on these processes.

Meditation Types and Creativity


The scientific evidence regarding the connection between meditation and creativity is inconsistent. While some studies support a strong positive impact of meditation practice on creativity (Orme-Johnson and Granieri, 1977; Orme-Johnson et al., 1977), others found only a weak association or no effect at all (Cowger, 1974; Domino, 1977). Recently, Zabelina et al. (2011) found that a short-term mindfulness manipulation (basically OMM) facilitated creative elaboration at high levels of neuroticism. As pointed out by Colzato et al. (2012), these inconsistencies might reflect a failure to distinguish between different and dissociable processes underlying creativity, such as convergent and divergent thinking (Guilford, 1950). Accordingly, Colzato et al. (2012) compared the impact of FAM and OMM on convergent thinking (a process of identifying one “correct” answer to a well-defined problem) and divergent thinking (a process aiming at generating many new ideas) in meditation practitioners. Indeed, the two types of meditation affected the two types of thinking in opposite ways: while convergent thinking tended to improve after FAM, divergent thinking was significantly enhanced after OMM. Colzato et al. (2012) suggest that FAM and OMM induce two different, to some degree opposite cognitive-control states that support state-compatible thinking styles, such as convergent and divergent thinking, respectively. In contrast to convergent thinking, divergent thinking benefits from a control state that promotes quick “jumps” from one thought to another by reducing the top-down control of cognitive processing—as achieved by OMM.


Conclusion



Research on meditation is still in its infancy but our understanding of the underlying functional and neural mechanisms is steadily increasing. However, a serious shortcoming in the current literature is the lack of studies that systematically distinguish between and compare different kinds of meditation on various cognitive, affective or executive control tasks—a criticism that applies to neuroscientific studies in particular. Further progress will require a better understanding of the functional aims of particular meditation techniques and their strategies to achieve them. It will also be important to more systematically assess short- and long-term effects of meditation, as well as the (not yet understood) impact of meditation experience (as present in practitioners but not novices). For instance, several approaches (like Buddhism) favor a particular sequence of acquiring meditation skills (from FAM to OMM) but evidence that this sequence actually matters is lacking. Moreover, the neural mechanisms underlying meditation effects are not well understood. It might be interesting that the three main research topics we have covered in the present review (attentional control, performance monitoring, and creativity or thinking style) imply the operation of extended neural networks, which might suggest that meditation operates on neural communication, perhaps by impacting neurotransmitter systems. Finally, it may be interesting to consider individual differences more systematically. If meditation really affects interactions between functional and neural networks, it makes sense to assume that the net effect of meditation on performance depends on the pre-experimental performance level of the individual—be it in terms of compensation (so that worse performers benefit more) or predisposition (so that some are more sensitive to meditation interventions).

Conflict of Interest Statement


The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Footnotes


  1. ^ It is important to note that even though this mini review is based on the theoretical framework of distinguishing FAM and OMM, another one includes the distinction between concentrative meditations, practices that regulate or control attention/awareness, and meditation practices which instead do not explicitly target attentional/effortful control (Chiesa and Malinowski, 2011; see also Chiesa, 2012 for a recent review on the difficulty of defining Mindfulness). Moreover, Travis and Shear (2010) have pointed out a third meditation category besides FAM and OMM: automatic self-transcending, which transcends FAM and OMM through the absence of both (a) focus and (b) individual control or effort.

References at the Frontiers site

Sunday, June 15, 2014

Diane Hamilton - "Everything is Workable: A Zen Approach to Conflict Resolution"

http://ecx.images-amazon.com/images/I/41jjZHErfBL.jpg

Zen teacher and integral darling Diane Hamilton has been vigorously promoting her new book, Everything Is Workable: A Zen Approach to Conflict Resolution (Shambhala, 2013). Here is the publisher's ad copy for the book:
Using mindfulness to work with and resolve the inevitable interpersonal conflicts that arise in all areas of life.
Conflict is going to be part of your life—as long as you have relationships, hold down a job, or have dry cleaning to be picked up. Bracing yourself against it won’t make it go away, but if you approach it consciously, you can navigate it in a way that not only honors everyone involved but makes it a source of deep insight as well. Seasoned mediator Diane Hamilton provides the skill set you need to engage conflict with wisdom and compassion, and even—sometimes—to be grateful for it. She teaches how to:

• Cultivate the mirror-like quality of attention as your base
• Identify the three personal conflict styles and determine which one you fall into
• Recognize the three fundamental perspectives in any conflict situation and learn to inhabit each of them
• Turn conflicts in families, at work, and in every kind of interpersonal relationship into win-win situations
In the video below, Hamilton stops by Google to give a Google Talk.

Diane Hamilton - "Everything is Workable: A Zen Approach to Conflict Resolution"

Published on Jun 13, 2014


Learn how to deal with conflicts more skillfully from state supreme court mediator and Zen master Diane Hamilton. Ignoring conflicts usually won't make them go away, but if you approach them consciously, you can navigate them in ways that not only honor everyone involved but also make them a source of deep insight. Diane will show you how to engage conflict with wisdom and compassion by:
- Cultivating the mirror-like quality of attention as your base
- Identifying the three personal conflict styles and determining which one you fall into
- Recognizing the three fundamental perspectives in any conflict situation
- Turning conflicts in families, at work, and in every kind of interpersonal situation into win-win situations
Reduce stress in your life by learning these techniques and transform the way you handle conflicts in your life.

Saturday, May 31, 2014

Ellen Langer — Science of Mindlessness and Mindfulness (NPR's On Being)


Ellen Langer was one of the trailblazers in seeing and researching the potential of mindfulness practice as an adjunct to psychotherapy and education. In this week's On Being podcast from NPR, host Krista Tippett speaks with Langer about her work, and about practicing mindfulness without meditation or yoga.

Langer is the author of several influential books, especially Mindfulness and Counterclockwise: Mindful Health and the Power of Possibility.

Ellen Langer — Science of Mindlessness and Mindfulness

On Being | May 29, 2014

Social psychologist Ellen Langer's unconventional studies have long suggested what brain science is now revealing: our experiences are formed by the words and ideas we attach to them. Naming something "play" rather than "work" can mean the difference between delight and drudgery. She is one of the early pioneers — along with figures like Jon Kabat-Zinn and Herbert Benson — in drawing a connection between mindlessness and unhappiness, between mindfulness and health. Dr. Langer describes mindfulness as achievable without meditation or yoga — as “the simple act of actively noticing things.”

Photo by Kris Krug - Dr. Ellen Langer presents at PopTech's annual conference in Camden, Maine, where she discussed the illusion of control, perceived control, successful aging, and decision-making.


Voices on the Radio


Ellen Langer is a social psychologist and a professor in the Psychology Department at Harvard University. Her books include Mindfulness and Counterclockwise: Mindful Health and the Power of Possibility.

Production Credits
  • Host/Executive Producer: Krista Tippett
  • Executive Editor: Trent Gilliss
  • Senior Producer: Lily Percy
  • Technical Director: Chris Heagle
  • Associate Producer: Mariah Helgeson

Like-Minded Conversations



Jon Kabat-Zinn — Opening to Our Lives - Jon Kabat-Zinn has learned, through science and experience, about mindfulness as a way of life. This is wisdom with immediate relevance to the ordinary and extreme stresses of our time — from economic peril, to parenting, to life in a digital age.




Esther Sternberg — Stress and the Balance Within - The American experience of stress has spawned a multi-billion dollar self-help industry. Wary of this, Esther Sternberg says that, until recently, modern science did not have the tools or the inclination to take emotional stress seriously. She shares fascinating new scientific insight into the molecular level of the mind-body connection.




Richard Davidson — Investigating Healthy Minds - Neuroscientist Richard Davidson is revealing that the choices we make can actually “rewire” our brains. He’s studied the brains of meditating Buddhist monks, and now he’s using his research with children and adolescents to look at things like ADHD, autism, and kindness.

Pertinent Posts from the On Being Blog



A Little Bit of Mindfulness Meditation Can Reduce a Lot of Pain - Even novice meditators are able to curb their pain after a few training sessions in mindfulness meditation.



Meditation and Mindfulness for All of Us: Six Questions with Sharon Salzberg - One of the pioneering teachers of Buddhist thought and meditation in the U.S. answers our in-house "wannabe" mindfulness practitioner's questions on techniques and focus, and the balance of new technologies with human connection.



Sharing Gratitude and Releasing Mindfulness - A week of gratitude for our many gifts: from Walter Rauschenbusch's gorgeous prayer to Thich Nhat Hanh's guiding dharma talk.



Danish Filmmaker Spends Year in Wisconsin Documenting Contemplative Neuroscience Research with Children and Vets in "Free the Mind" - A Q+A with Phie Ambo on meditation, contemplative neuroscience, and what she learned while making the documentary Free the Mind on neuroscientist Richard Davidson.



An Aural Hike Through the Hoh Valley Rain Forest: A Soundscape Meditation - Take this mystical aural hike into the Hoh Rain Forest in Olympic National Park to One Square Inch of Silence — and experience the chirping twitter of the Western wren and the haunting call of the Roosevelt elk.



Questioning the Science of Happiness (Infographic) - Happiness. A word that gets bandied about quite a bit lately, and for good reason. An infographic that jogs a host of questions and insights.



Mastering the Hong and the Sau - A joyous monk at a meditation center in India teaches a young journalist how to breathe, one breath at a time.

Sunday, May 25, 2014

Most People Cannot Multitask, But a Few Are "Supertaskers"

While you are still likely to see "multitasking" as a preferred skill on job announcements, the science has been relatively clear - almost no one can multitask well. Granted, there are a few people who are better than average.

But there is new research to suggest that maybe 1-2% of people can be supertaskers; no matter how many tasks they are juggling, they tend not to make mistakes.

Hmmm . . . I wonder how many of these people have ADD?

Multitask Masters

Posted by Maria Konnikova
May 7, 2014 | The New Yorker



In 2012, David Strayer found himself in a research lab, on the outskirts of London, observing something he hadn’t thought possible: extraordinary multitasking. For his entire career, Strayer, a professor of psychology at the University of Utah, had been studying attention—how it works and how it doesn’t. Methods had come and gone, theories had replaced theories, but one constant remained: humans couldn’t multitask. Each time someone tried to focus on more than one thing at a time, performance suffered. Most recently, Strayer had been focussing on people who drive while on the phone. Over the course of a decade, he and his colleagues had demonstrated that drivers using cell phones—even hands-free devices—were at just as high a risk of accidents as intoxicated ones. Reaction time slowed, attention decreased to the point where they’d miss more than half the things they’d otherwise see—a billboard or a child by the road, it mattered not.

Outside the lab, too, the multitasking deficit held steady. When Strayer and his colleagues observed fifty-six thousand drivers approaching an intersection, they found that those on their cell phones were more than twice as likely to fail to heed the stop signs. In 2010, the National Safety Council estimated that twenty-eight per cent of all deaths and accidents on highways were the result of drivers on their phones. “Our brain can’t handle the overload,” Strayer told me. “It’s just not made that way.”

What, then, was going on here in the London lab? The woman he was looking at—let’s call her Cassie—was an exception to what twenty-five years of research had taught him. As she took on more and more tasks, she didn’t get worse. She got better. There she was, driving, doing complex math, responding to barking prompts through a cell phone, and she wasn’t breaking a sweat. She was, in other words, what Strayer would ultimately decide to call a supertasker.

About five years ago, Strayer recalls, he and his colleagues were sorting through some data, and noticed an anomaly: a participant whose score wasn’t deteriorating with the addition of multiple tasks. “We thought, That can’t be,” he said. “So we spent about a month trying to see an error.” The data looked solid, though, and so Strayer and his colleagues decided to push farther. That’s what he was doing in London: examining individuals who seemed to be the exception to the multitasking rule. A thousand people from all over the U.K. had taken a multitasking test. Most had fared poorly, as expected; in the London lab were the six who had done the best. Four, Strayer and his colleagues found, were good—but not quite good enough. They performed admirably but failed to hit the stringent criteria that the researchers had established: performance in the top quartile on every individual measure that remained equally high no matter how many tasks were added on. Two made every cut—and Cassie in particular was the best multitasker he had ever seen. “It’s a really, really hard test,” Strayer recalls. “Some people come out woozy—I have a headache, that really kind of hurts, that sort of thing. But she solved everything. She flew through it like a hot knife through butter.” In her pre-test, Cassie had made only a single math error (even supertaskers usually make more mistakes); when she started to multitask, even that one error went away. “She made zero mistakes,” Strayer says. “And she did even better when she was driving.”

Strayer believes that there is a tiny but persistent subset of the population—about two per cent—whose performance does not deteriorate, and can even improve, when multiple demands are placed on their attention. The supertaskers are true outliers. According to Strayer, multitasking isn’t part of a normal distribution akin to birth weight, where even the lightest and heaviest babies fall within a relatively tight range around an average size. Instead, it is more like I.Q.: most people cluster in an average range, but there is a long tail where only a tiny fraction—single digits among thousands—will ever find themselves.

In the first controlled study of the supertasker phenomenon, in 2010, Strayer and Jason Watson, a cognitive neuroscientist, asked two hundred participants to complete a standard driving test that they had previously used to illustrate the perils of multitasking. In a simulator, each person would follow an intermittently braking car along a multi-lane highway, complete with on and off ramps, overpasses, and oncoming traffic. Their task was simple: keep your eyes on the road; keep a safe distance; brake as required. If they failed to do so, they’d eventually collide with their pace car.

Then came the multitasking additions. They would have to not only drive the car but follow audio instructions from a cell phone. Specifically, they would hear a series of words, ranging from two to five at a time, and be asked to recall them in the right order. And there was a twist. Interspersed with the words were math problems. If they heard one of those, the drivers had to answer “true,” if the problem was solved correctly, or “false,” if it wasn’t. They would, for instance, hear “cat” and immediately after, “is three divided by one, minus one, equal to two?” followed by “box,” another problem, and so on. Intermittently, they would hear a prompt to “recall,” at which point, they’d have to repeat back all the words they’d heard since the last prompt. The agony lasted about an hour and a half.
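The secondary task described here is essentially an operation-span procedure: items to remember interleaved with equations to verify, followed by ordered recall. Purely as an illustration, here is a minimal Python sketch of one such block; it is not the researchers' materials or code, and the word list, equation format, and set size are assumptions made for the example.

```python
import random

# Illustrative word pool (not the stimuli used in the study).
WORDS = ["cat", "box", "pen", "cup", "map", "log", "jar", "fan"]

def make_equation():
    """Return (spoken_text, is_true) for a simple division-then-subtraction check."""
    b = random.randint(1, 3)
    a = b * random.randint(1, 5)                # ensure the division is exact
    c = random.randint(0, 2)
    true_result = a // b - c
    claimed = random.choice([true_result, true_result + 1])   # sometimes false
    text = f"is {a} divided by {b}, minus {c}, equal to {claimed}?"
    return text, claimed == true_result

def run_block(set_size=3):
    """Alternate words to remember with equations to verify, then test ordered recall."""
    to_remember = random.sample(WORDS, set_size)
    correct_verifications = 0
    for word in to_remember:
        print(f"remember: {word}")
        text, is_true = make_equation()
        answer = input(f"{text} (true/false) ").strip().lower() == "true"
        correct_verifications += (answer == is_true)
    recalled = input("recall the words in order, separated by spaces: ").split()
    return correct_verifications, recalled == to_remember

if __name__ == "__main__":
    hits, recall_ok = run_block(set_size=3)
    print(f"equations correct: {hits}/3; recall {'correct' if recall_ok else 'incorrect'}")
```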

As expected, over ninety-seven per cent of the participants failed. They were just fine if they had to drive without worrying about math or word memorization, and they could memorize and do math all right if they didn’t also have to drive. But when the two tasks combined, their performance plummeted. Hidden in the averages, though, were five people, three men and two women, whose performance patterns didn’t change a bit, no matter how many things they were asked to take on. When they were just doing a single task, be it driving or completing the attention-span test, they were already exceptional. When they began to multitask, that exceptionality became all the more apparent. They performed as well as—and, in some cases, better than—when they’d been unitasking. By 2012, after Cassie and her other supertasking U.K. colleague had been tested, Strayer’s team had identified nineteen supertaskers in a sample of seven hundred.

What makes the supertaskers able to do what they do? Are most of us doomed to a unitasking future? Once he confirmed that the phenomenon was real and not a statistical or a laboratory fluke, Strayer, naturally, wondered exactly that. “When you see these people perform at this level, you wonder what makes them be able to behave the way they can. What can they tell us about attention?” he says. Until quite recently, that question was difficult to answer. There simply weren’t enough supertaskers around, and the cost of finding them, bringing them to the lab, and running them through expensive simulations was prohibitive. Now, however, a new test of supertasking ability—this one to be administered online—should make examining the problem much easier.

Teaming up with psychologists from the University of Newcastle in Australia, Strayer and his team at the University of Utah have recently been working on a Web version of the supertasker challenge. This time, you’re not driving; you’re acting the part of a bouncer in a club, asked to let in cool people and keep out uncool ones. To make your decisions, you have to rely on both visual and auditory cues, managing constantly opening doors as quickly as you can to keep the club exclusive. The researchers are about to submit a paper explaining their initial results: out of the approximately two hundred and fifty individuals who took the test as part of the initial study, only seven appear to perform at supertasker levels. (I took the test and failed completely. I was in agony by round five, only to realize that I had fifteen more to go.)

The prospect of an Internet test for supertasking is enticing. “Now that we have the Internet version, and everyone who wants to can sign up and test themselves, we can have thousands of people testing,” Strayer says. “It takes a lot of time to find them, but now we will finally have the numbers we need.”

So what are we going to learn from them, exactly? For one, Strayer thinks that the ability is probably genetic to a large extent. You are either born with the neural architecture that allows you to overcome the usual multitasking challenges, or you aren’t. Already, with their admittedly limited sample, Strayer and his team have found that supertaskers exhibit different patterns of neural activation when multitasking than most of us. There is less activity in those frontal regions—the frontopolar prefrontal cortex, the dorsolateral prefrontal cortex, and the anterior cingulate cortex—that have been implicated in multitasking and executive control in the past. Supertasker brains, in other words, become less, not more, active with additional tasks: they are functioning more efficiently. “Their brains are doing something we can’t do,” Strayer says. With additional participants, not only can the Utah team more deeply examine these initial findings but they can also supplement them with genetic work, something that is impossible to do without a very large base sample; that is, a big enough chunk of the population that can serve as the basis of comparison. (Cassie, as it turns out, isn’t simply an élite multitasker. She is an outlier in her chosen profession: when he met her, she was training to try out for the British Olympic team in cycling. Strayer believes that supertaskers may naturally gravitate toward fields that reward those who can juggle multiple inputs exceptionally well—the high-end restaurant chefs or football quarterbacks of the world.)

The flip side, of course, is that, for the ninety-seven and a half per cent of us who don’t share the requisite genetic predisposition, no amount of practice will make us into supertasking stars. In separate work from Stanford University, a team of neuroscientists found that heavy multitaskers—that is, those people who habitually engaged in multiple activities at once—fared worse than light multitaskers on measures of executive control and effective task switching. Multitasking a lot, in other words, appeared to make them worse at it. (In his earlier work, Strayer didn’t find that drivers who were used to talking on their phones while driving performed any better on multitasking measures than those who weren’t. Laboratory practice didn’t help improve their test scores, either.) “In these particular tasks, you can’t get much of a practice effect,” Strayer says.

The irony of Strayer’s work is that when people hear that supertaskers exist—even though they know they’re rare—they seem to take it as proof that they, naturally, are an exception. “You’re not,” Strayer told me bluntly. “The ninety-eight per cent of us, we deceive ourselves. And we tend to overrate our ability to multitask.” In fact, when he and his University of Utah colleague, the social psychologist David Sanbonmatsu, asked more than three hundred students to rate their ability to multitask and then compared those ratings to the students’ actual multitasking performances, they found a strong relationship: an inverse one. The better someone thought she was, the more likely it was that her performance was well below par.

At one point, I asked Strayer whether he thought he might be a supertasker himself. “I’ve been around this long enough I didn’t think I am,” he said. Turns out, he was right. There are the Cassies of the world, it’s true. But chances are, if you see someone talking on the phone as she drives up to the intersection, you’d do better to step way back. And if you’re the one doing the talking? You should probably not be in your car.

~ Photograph: C. J. Burton/Corbis.

Thursday, May 15, 2014

A New Adaptive Videogame for Training Attention and Executive Functions

http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g003.jpg

This is an interesting article from Frontiers in Psychology: Cognition on the use of videogames to "train" attention and executive function. The purpose of this particular study was to develop an adaptive videogame to support rehabilitation in people who have suffered traumatic brain injury (TBI), the primary symptoms of which are impairments of attention and executive functions.

Despite the apparent simplicity of the game interface, the results confirmed that the cognitive abilities the game was designed to enhance were indeed engaged and, further, suggested that training improved attentional control during play.

Full Citation: 
Montani V, De Filippo De Grazia M and Zorzi M. (2014, May 13). A new adaptive videogame for training attention and executive functions: Design principles and initial validation. Frontiers in Psychology: Cognition; 5:409. doi: 10.3389/fpsyg.2014.00409

A new adaptive videogame for training attention and executive functions: design principles and initial validation

Veronica Montani [1], Michele De Filippo De Grazia [1] and Marco Zorzi [1,2,3]
1. Department of General Psychology, University of Padova, Padova, Italy
2. Center for Cognitive Neuroscience, University of Padova, Padova, Italy
3. IRCCS San Camillo Neurorehabilitation Hospital, Venice Lido, Italy
A growing body of evidence suggests that action videogames could enhance a variety of cognitive skills, and more specifically attention skills. The aim of this study was to develop a novel adaptive videogame to support the rehabilitation of the most common consequences of traumatic brain injury (TBI), that is, the impairment of attention and executive functions. TBI patients can be affected by psychomotor slowness and by difficulties in dealing with distraction, maintaining a cognitive set for a long time, processing multiple simultaneously presented stimuli, and planning purposeful behavior. Accordingly, we designed a videogame that was specifically conceived to activate those functions. Playing involves visuospatial planning and selective attention, active maintenance of the cognitive set representing the goal, and error monitoring. Moreover, different game trials require the player to alternate between two tasks (i.e., task switching) or to perform the two tasks simultaneously (i.e., divided attention/dual-tasking). The videogame is controlled by a multidimensional adaptive algorithm that calibrates task difficulty on-line based on a model of user performance that is updated on a trial-by-trial basis. We report simulations of user performance designed to test the adaptive game as well as a validation study with healthy participants engaged in a training protocol. The results confirmed the involvement of the cognitive abilities that the game is supposed to enhance and suggested that training improved attentional control during play.


Introduction


Cognitive enhancement through videogame playing is a hot topic in cognitive science. Most of the literature on the effect of videogame play is centred on "action" videogames, which are remarkably challenging in terms of visual and attention demands. Indeed, many investigations have focused on the modulation of visual skills and have revealed that videogame players (VGPs) outperform non-videogame players (NVGPs) on a variety of visuo-attentional tasks (Green and Bavelier, 2003, 2006a; for reviews see Spence and Feng, 2010; Boot et al., 2011; Hubert-Wallander et al., 2011a; Latham et al., 2013). For example, VGPs were better at localizing the target in many different visual search tasks (e.g., Castel et al., 2005; West et al., 2008; Hubert-Wallander et al., 2011b), they were better at suppressing irrelevant information (e.g., Mishra et al., 2011; Wu et al., 2012), and in general they appeared to have more available attentional resources (e.g., Green and Bavelier, 2003, 2006b; Dye et al., 2009a).

Nevertheless, there is also evidence that videogame playing enhances a variety of other cognitive skills (Green and Bavelier, 2003; Dye et al., 2009a; Anguera et al., 2013) and that cognitive processes other than visuo-spatial ability might benefit from playing more strategic games (e.g., Basak et al., 2008). For example, Colzato et al. (2010) reported that VGPs incur a smaller task-switching cost than NVGPs, suggesting that they have better cognitive control (see also Cain et al., 2012; Strobach et al., 2012). Karle et al. (2010) suggested that the smaller switch cost is the consequence of more efficient task reconfiguration due to a superior ability to control attentional resources (also see Meiran et al., 2000).

Action videogame playing also seems adequate for training executive control skills that are crucial for the coordination of different tasks in complex situations. For example, Strobach et al. (2012) showed that VGPs outperformed NVGPs in a dual task condition (but see Donohue et al., 2012, for contrasting results) and, even more convincingly, that non-gamers trained with an action videogame suffered less dual-task cost after training in comparison to non-gamers trained with a puzzle game. It is worth noting that the latter result was confirmed in the study of Chiappe et al. (2013) using a more complex task that was shown to predict performance in real-life settings.

Selective and controlled aspects of attention appear to benefit more from videogame playing than transient, automatic aspects (Chisholm et al., 2010). Clark et al. (2011) suggested that the better performance of VGPs is explained by an improvement in higher-level abilities such as attentional control, in addition to better bottom-up visual processing. Accordingly, a neuroimaging study found reduced recruitment of the network associated with the control of top-down attention in VGPs, despite their superior performance in a visual search task relative to NVGPs (Bavelier et al., 2011). This result was interpreted as evidence that VGPs are more efficient in the allocation of attention.

Studies comparing VGPs and NVGPs on many different tasks invariably show that VGPs are faster across a wide range of tasks without showing speed-accuracy trade-offs (Dye et al., 2009b; but see Nelson and Strachan, 2009). Moreover, videogame training was shown to be a helpful regimen for producing a marked increase in speed of information processing in the elderly (Drew and Waters, 1986; Clark et al., 1987; Anguera et al., 2013).

It is worth noting that most of these studies do not establish a causal link between videogame play and cognitive enhancement because they do not control for pre-existing differences between VGPs and NVGPs (Kristjánsson, 2013). However, some studies have compared the performance of two groups of non-players before and after different types of training. For example, an action videogame was compared to a game that made heavy demands on visuomotor coordination but, unlike action videogames, did not require the participant to process multiple objects at once at a fast pace. Action-trained participants showed greater training-induced improvements than participants trained on a control game, thereby showing that the benefits of play can be trained in non-players (Green and Bavelier, 2003, 2006a,b; Feng et al., 2007; Strobach et al., 2012). There is also some evidence that learning/enhancement is not specific to the trained task but generalizes to some degree to untrained aspects (Green and Bavelier, 2006b; Mathewson et al., 2012) and transfers to a completely different and more "ecological" domain (Gopher et al., 1994; Rosenberg et al., 2005; see Boot et al., 2011, for a critical discussion).

The aim of the present study was to develop a novel adaptive videogame for training attention and executive functions, with particular emphasis on design features that make the game suitable for brain-damaged patients as a tool to support cognitive rehabilitation. Despite some contrasting findings (Boot et al., 2008; Murphy and Spencer, 2009; Irons et al., 2011), videogames seem to enhance a variety of cognitive skills and appear to be a promising tool for training cognitive abilities (e.g., Achtman et al., 2008; Basak et al., 2008; Anguera et al., 2013; Franceschini et al., 2013). Moreover, neuroplasticity in the adult brain could be guided with specific training to yield better recovery (e.g., Krainik et al., 2004; Gehring et al., 2008). The rationale for designing a new videogame, despite the great variety of commercial videogames that are currently available, was twofold. First, designing a novel videogame allows specific features to be included in a theory-driven manner and permits fine control of the difficulty dimensions, including trial-by-trial adaptation to user performance. Second, the graphical user interface of commercial videogames might be too demanding for patients with cognitive deficits in terms of speed, visual complexity, or motor requirements.

Before presenting the videogame, we start with a discussion of the theoretical principles that guided our design choices in terms of structure and features of the game. We then report a modeling study in which we simulated users with different abilities to assess the efficiency of the adaptive algorithm in estimating the “performance space” of the user, which is crucial for the online adjustment of game difficulty. Finally, we validated the game with unimpaired participants (healthy young adults) to ensure that the game involves the activation of the desired cognitive functions as well as to assess the effect of a short training period (<10 h over 2 weeks). Note that the evaluation of videogame training for the rehabilitation of brain damaged patients is left to a future clinical trial.

Game Design Principles

Dysexecutive syndrome and attention deficits are common consequences of traumatic brain injury (hereafter TBI; e.g., Levine et al., 1998; Stuss and Levine, 2002). Indeed, the acceleration-deceleration mechanism of traumatic injury means that the frontal and temporal lobes are the most frequently damaged sites, with subsequent impairment of a wide range of high-level functions (Povlishock and Katz, 2005). The resulting impairments in attention and executive functions can profoundly affect an individual's everyday cognition, causing difficulties in the management of very simple daily activities (Sohlberg and Mateer, 2001). Attention deficits have been found to be significantly correlated with the inability to return to work (Van Zomeren and Brouwer, 1985; Vikki et al., 1994). Because of the related disabilities and the increasing number of people suffering from this pathology, the development of effective rehabilitation strategies should be considered a high priority. Furthermore, the recent finding of Kamke et al. (2012) that increased visual attention demands entail a decrease in motor cortex plasticity strongly supports the notion that attention can be a potent modulator of cortical plasticity.

The design of the game was guided by principles relevant for the rehabilitation of cognitive deficits in TBI patients. The first principle was to enhance mental flexibility, which is the ability to respond to environmental changes in an efficient way. Mental flexibility implies efficient deployment of attentional resources in accordance with the context, so as to select and maintain the cognitive set that is appropriate for the current goal. In order to increase mental flexibility, training should engage patients in switching between different cognitive sets. Alternating between different tasks requires reconfiguration for the new task and inhibition of the currently active set, that is, the set of the previous task (Monsell, 2003). Switching can be predictable or unpredictable (e.g., Andreadis and Quinlan, 2010). If the tasks alternate in a predictable way, participants can take advantage of the information about the switch and prepare for it endogenously. If the tasks alternate in a random way (i.e., unpredictable switching), switching requires a faster reconfiguration of the mental set that is exogenously triggered by the task itself. Overall, unpredictable switching is considered more demanding than predictable switching, but since TBI patients seem to have problems with the endogenous engagement of attention (Stablum et al., 1994) as well as slow information-processing speed (e.g., Mathias and Wheaton, 2007), they can benefit from training with both types of switching. Therefore, training should initially involve predictable switching and then progress to unpredictable switching.

Patients also have problems managing two simultaneous tasks (Sohlberg and Mateer, 2001). The multitasking deficit can be ascribed to their slower processing speed (Dell’Acqua et al., 2006; Foley et al., 2010) or to a specific impairment in the ability to divide attention (Serino et al., 2006). There is evidence that dual task training improves the ability to divide attention by speeding up information processing through the bottleneck in the prefrontal cortex (Dux et al., 2009). Finally, increased attentional load induced by multitasking has been shown to hinder visuo-spatial monitoring in patients with right hemisphere stroke (Bonato et al., 2010, 2012, 2013). Regardless of the specific mechanism underlying the deficit, extensive training with dual tasking can greatly reduce multitasking cost (Van Selst et al., 1999; Schumacher et al., 2001; Tombu and Jolicoeur, 2004). Therefore, a second important principle that should guide the design of game training is to improve the ability to achieve different goals at the same time. Dual-tasking requires maintaining the cognitive sets of both tasks while dividing attentional resources between the two goals.

Including both task switching and dual-tasking within the training reflects the complexity of daily living. In more ecological settings, individuals must often manage situations that require quickly changing goals or pursuing two goals simultaneously. Flexible or integrated training regimens, requiring constant switching of processing and continuous adjustment to new task demands, have also been claimed to lead to greater transfer (Bherer et al., 2005).

The third principle that should guide the design of a game for cognitive training is to stimulate planning ability. Indeed, the disorganized behavior of TBI patients is another aspect of their poor ability to control cognitive resources. They are unable to maintain intentions in goal-directed behavior, likely because the sustained attention system is compromised. This results in a high level of distractibility and cue-dependent behavior (Levine et al., 2011). Flexibility in planning and strategy selection should be promoted by trial-by-trial changes of the game playground, thereby requiring the gamer to manage a novel situation every time. Achieving the goal therefore requires choosing an adequate strategy, interrupting automatic responses, and monitoring performance according to the task. Consequently, the gamer needs to plan the correct sequence of actions to achieve the goal and to actively maintain this set of actions.

Patients’ performance tends to be more variable and less consistent over time in comparison to healthy controls (Stuss et al., 1989, 1994). A critical challenge is to organize the progression of practice in a way that promotes performance improvement while finding a balance between patient variability and the choice of optimal task difficulty. Moreover, TBI patients are often unaware of their impairments (Prigatano and Schacter, 1991), and their anosognosia is a further challenge because rehabilitation can be seriously hindered by the lack of patient cooperation. Anosognosia predicts recovery from stroke (Gialanella and Mattioli, 1992), and experience-dependent plastic reorganization requires attention to be paid to the activity in question (Recanzone et al., 1993). Therefore, an important principle is to maintain attention and motivation by providing sufficient positive reinforcement. Videogames are a useful tool because they are more entertaining than other training programs, but in order to maximize the benefit they should be equipped with an adaptive algorithm. Motivation for playing can be maintained by programming the algorithm to adapt the difficulty of the game to a level that is challenging but feasible, for example by keeping the probability of success around 0.75. The ability to complete the task gives a "reward" to the gamer that may enhance his/her motivation. Moreover, adaptive difficulty is an important factor in enhancing training effects (Holmes et al., 2009; Brehmer et al., 2012).

Finally, every task should be completed within a pre-determined amount of time that depends on the difficulty of the task. The time pressure encourages faster processing, as consistently shown in the literature on videogame playing (Dye et al., 2009b; Hubert-Wallander et al., 2011a).


The Game: “Labyrinth”


Overall Game Design

A little man moves along a maze to reach a goal. The game character is controlled by the gamer through a joystick. The walls that form the maze are variable: both their quantity and their location change at every trial according to the task difficulty. The only constraint on the random distribution of the walls is that the software avoids creating closed areas, which could prevent goal achievement.
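The paper does not say how closed areas are detected, only that they are avoided. One minimal way to enforce the constraint, sketched below, is to re-draw the wall layout until a breadth-first search confirms that every free cell is still reachable; the grid size, wall count, and function names here are illustrative choices, not details from the game.

    import random
    from collections import deque

    def generate_maze(rows, cols, n_walls, rng=random):
        """Place n_walls walls at random, rejecting layouts with closed areas."""
        while True:
            cells = [(r, c) for r in range(rows) for c in range(cols)]
            walls = set(rng.sample(cells, n_walls))
            free = [c for c in cells if c not in walls]
            # Breadth-first search from one free cell; accept the layout only
            # if every free cell is reachable (i.e., no closed-off regions).
            seen, queue = {free[0]}, deque([free[0]])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (r + dr, c + dc)
                    if nxt in seen or nxt in walls:
                        continue
                    if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                        seen.add(nxt)
                        queue.append(nxt)
            if len(seen) == len(free):
                return walls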

The maze difficulty changes according to the type of task. Indeed, the game includes two different tasks, the "Diamond Task" (hereafter DT) and the "Snake Task" (hereafter ST). Overall, every task has eight difficulty levels, on a continuum ranging from the least demanding (level 1) to the most demanding (level 8). In the DT (see Figure 1), the easiest maze is the one with as few walls as possible, and the number of walls increases as performance improves. Conversely, in the ST (see Figure 2), the easiest maze is the one with as many walls as possible, and accordingly the number of walls decreases as performance improves.

http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g001.jpg

FIGURE 1. Diamond task. The goal is to collect all diamonds within the time limit.

http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g002.jpg

FIGURE 2. Snake task. The goal is to avoid being caught by the snake and to reach a "shelter" house that appears at a random location.
The goal of the game character depends on the nature of the current task. In the DT, the man has to collect the diamonds that are randomly distributed across the play area. The DT resembles the open-ended version of the Travelling Salesman Problem (TSP), a task that strongly involves planning and is also representative of many real-world situations (Cutini et al., 2008). Given a set of spatial locations represented by points on a map, the task consists in finding an itinerary that visits each point exactly once while keeping the total traveled distance as short as possible. While the classic TSP requires returning to the starting point, the open-ended version introduces a distinction between start- and end-point, so that participants have to perform an open path instead of a loop. The TSP can be solved with multiple close-to-optimal solutions, and healthy participants usually change strategy along the pathway to optimize performance. Therefore, achieving the task requires controlling and modifying the plan according to an evaluation of both the current position and the remaining path. Basso et al. (2001) showed that TBI patients tend to use a fixed strategy until the end of the task without considering the alternative options, consistent with the hypothesis that TBI patients are unable to inhibit the current strategy in order to choose a better one (also see Cutini et al., 2008, for a computational model of normal and impaired performance in the TSP). In the DT, the number of diamonds ranges from one, at the least demanding level, to eight at the most demanding level. Achieving the goal requires the participant to plan a route that collects every diamond within the time limit. Usually the best overall strategy is to follow the shortest path passing through the diamonds.
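Since the number of diamonds never exceeds eight, a route of the kind described above can be found by exhaustively scanning all visiting orders (at most 8! = 40,320). The sketch below is only illustrative: it scores routes by Manhattan distance and ignores the maze walls, so it approximates rather than reproduces the in-game planning problem.

    from itertools import permutations

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def best_collection_order(start, diamonds):
        """Open-ended TSP by brute force: shortest order in which to visit all diamonds."""
        best_order, best_len = None, float("inf")
        for order in permutations(diamonds):
            length, pos = 0, start
            for d in order:
                length += manhattan(pos, d)
                pos = d
            if length < best_len:
                best_order, best_len = order, length
        return best_order, best_len

    # Example: start at (0, 0) with three diamonds on the grid.
    print(best_collection_order((0, 0), [(2, 3), (5, 1), (4, 4)]))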

In the ST, the man has to avoid being caught by a snake and to reach a "shelter" house that appears at a random location (see Figure 2). The range of difficulty is enforced by controlling the running speed of the snake, as well as the time limit for trial completion. Achieving this task requires a very different strategy compared to the diamond task. Indeed, the best strategy is sometimes just the opposite: if the man takes the shortest way to the shelter house, it is likely that the snake will catch him. Avoiding capture often requires choosing a longer route, sometimes moving even in the direction opposite to the house location. Likewise, depending on the location of the house and the disposition of the maze walls, another good strategy may be to stop for a while in a strategic location, waiting for the snake to take a wrong route. In this way, reaching the house becomes possible provided that the gamer chooses the right timing and moves quickly. Basically, the task requires "tricking" the snake. Therefore, accomplishing the task requires adopting complex strategies that involve planning and sometimes also inhibiting the most "automatic" action.

The DT and ST alternate with a frequency that is adjusted according to the performance score. The difficulty of this "switch condition" has four levels, ranging from completely predictable switching, when one task follows the other, to completely random switching. The two intermediate levels involve a switch every two trials and a switch every three trials, respectively. In some trials, the gamer has to perform the two tasks simultaneously (see Figure 3). In these trials the participant has to avoid the snake and collect the diamonds at the same time. Contrary to the standard ST, in this case the shelter house appears only after all diamonds are collected. Overall, successful performance requires reaching two simultaneous goals: collecting every diamond and avoiding the snake within the time limit. The dual task condition is administered only if the percentage of success is higher than 60%. When the gamer achieves this performance level, the probability of receiving a dual task trial is 30%. In this way, the participant can gain enough expertise in the two single tasks before managing the more difficult dual task condition. If the trial is performed correctly the player receives some points, whereas if the participant fails to reach the goal some points are subtracted from the score. Every six trials the gamer receives feedback concerning his/her performance.
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g003.jpg

FIGURE 3. Dual task. In these trials the goal is to avoid the snake and collect the diamonds at the same time. The shelter house appears only when all diamonds have been collected.
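A minimal sketch of the scheduling rules just described (predictable vs. random switching, plus a 30% chance of a dual-task trial once the success rate exceeds 60%) could look like the following; the function and task names are illustrative, not taken from the game's implementation.

    import random

    SWITCH_PERIODS = {1: 1, 2: 2, 3: 3}   # level -> switch every N trials; level 4 = random

    def next_trial(trial_idx, prev_task, switch_level, success_rate, rng=random):
        """Pick the next trial type: 'diamond', 'snake', or 'dual'."""
        # Dual-task trials appear only once performance is good enough (see text).
        if success_rate > 0.60 and rng.random() < 0.30:
            return "dual"
        other = "snake" if prev_task == "diamond" else "diamond"
        if switch_level == 4:                     # unpredictable switching
            return rng.choice(["diamond", "snake"])
        period = SWITCH_PERIODS[switch_level]     # predictable switching
        return other if trial_idx % period == 0 else prev_task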

Adaptive Dimensions

Following Wilson et al. (2006) we used a multidimensional learning algorithm for continuous, online adaptation of task difficulty to the current performance of the gamer. Adaptation was implemented using three dimensions of difficulty:
(1) Time limit: the time limit to perform the task. The level of difficulty ranges from 5 to 100 s. It is updated every trial.

(2) Task difficulty: overall it has eight levels, but the difficulty depends on the task. In the DT it is related to the number of diamonds that have to be collected (from one to eight), while in the ST it is related to the snake speed. In both tasks the difficulty also depends on the number of walls in the maze (see Overall Game Design). It is updated every trial.

(3) Switch condition: the type of switch, predictable vs. unpredictable. It has four levels (every trial, every two, every three, random). This dimension is updated every 12 trials.
The combination of the three dimensions forms the "training space." This can be described as a cube with the three dimensions of difficulty as sides (Wilson et al., 2006). Every trial corresponds to a point within this cube (with coordinates defined by the values of the three difficulty dimensions), and every point is associated with a certain probability of success: easy trials have a high probability of success, hard trials a low one. Each user is associated with a different probability-of-success matrix that defines the individual "performance space". For example, a patient who is more impaired in inhibiting automatic responses and less impaired in speed of processing will have a higher probability of success along the "time" dimension and a lower probability of success along the "task difficulty" dimension.

The task of the algorithm is to estimate the performance space of the user from the current performance. After sampling points within the training space, the algorithm uses the responses of the player to build an interpolated model of the entire performance space. It then selects a random point in the space that it estimates to correspond to the level required to maintain performance at 75% accuracy (Figure 4). Moreover, as the game advances, the algorithm updates the performance space according to the successes and failures of the gamer.
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g004.jpg

FIGURE 4. Performance of the adaptive algorithm in ensuring a defined level of success in simulation testing. The graph shows the gamer’s success rate (measured as a running average over the last 20 trials) as a function of trial number. Note that the algorithm adapted to the ability of the gamer in less than 100 trials and then kept the success rate at the desired level of 75%.
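As a rough illustration of this scheme, the sketch below keeps a grid of estimated success probabilities over the three difficulty dimensions, nudges the local estimate after every trial, and draws the next trial from the cells whose estimate is closest to the 75% target. It deliberately omits the interpolation step of Wilson et al. (2006), and all names and parameter values are illustrative rather than taken from the paper.

    import numpy as np

    class AdaptiveDifficulty:
        """Toy version of the multidimensional adaptive algorithm described above."""

        def __init__(self, shape=(8, 8, 4), target=0.75, lr=0.1):
            self.p = np.full(shape, 0.5)   # initial guess: 50% success everywhere
            self.target = target
            self.lr = lr

        def next_trial(self):
            # Sample among the cells whose estimated success rate is nearest the target.
            dist = np.abs(self.p - self.target)
            candidates = np.argwhere(dist <= dist.min() + 0.05)
            return tuple(candidates[np.random.randint(len(candidates))])

        def update(self, cell, success):
            # Move the local estimate toward the observed outcome (1 = success, 0 = failure).
            self.p[cell] += self.lr * (float(success) - self.p[cell])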

Simulation

In order to test the algorithm, performance in the game was simulated with a Matlab model (http://www.mathworks.co.uk/). The simulator represented the performance space of the gamer at a given moment by a matrix of success probabilities, as in the adaptive algorithm. The subject's performance space was characterized by a "performance threshold," that is, the set of coordinates that specified the high success zone (in which the probability of success is 100%). Outside the high success zone, the probability of success for a given type of game trial was calculated by determining the distance between its location and the subject's threshold and applying a sigmoid function to this distance (Wilson et al., 2006). If the trial location is far from the threshold, the probability of success at that level of difficulty will be low or zero, whereas if the trial location is close to the threshold, the probability of success will be high. The "performance threshold" could move up, simulating performance improvement as a consequence of training. In the simulator, the learning rate (LR) was assumed to be a function of the derivative of the sigmoid (Wilson et al., 2006). For example, if the gamer succeeds on a trial far from the threshold, her performance improves quickly (a high LR).
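Under these assumptions, a simulated gamer can be written in a few lines: success is certain inside the high-success zone and falls off as a sigmoid of the distance from the threshold outside it. The steepness value and function names below are illustrative choices, not values from the paper.

    import numpy as np

    def simulated_success(trial_point, threshold_point, steepness=2.0, rng=np.random):
        """Return True if the simulated gamer succeeds on this trial."""
        trial = np.asarray(trial_point, dtype=float)
        thr = np.asarray(threshold_point, dtype=float)
        if np.all(trial <= thr):
            return True                       # inside the high-success zone: 100% success
        # Outside the zone: success probability decays with distance from the threshold.
        distance = np.linalg.norm(np.maximum(trial - thr, 0.0))
        p_success = 1.0 / (1.0 + np.exp(steepness * distance))
        return rng.random() < p_success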

The first simulation was carried out with a virtual gamer who has a fixed level of performance and zero LR. The aim of the simulation was to test if the algorithm was able to develop an accurate model of the gamer ability. In Figure 5A, the ability of the algorithm to estimate the performance of four different gamers is represented on a trial by trial basis. At the beginning of the game the algorithm cannot reliably estimate the different performance spaces. After 100 trials, the estimates diverge and then reach the specific level of performance corresponding to the fixed limit set for each simulated gamer. Figure 6 shows a tridimensional representation of the performance space of three different virtual gamers (with fixed limit of performance).
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g005.jpg

FIGURE 5. Simulation testing the efficacy of the adaptive algorithm to accurately estimate a model of the gamer ability. (A) The estimated performance space of a virtual gamer who has a fixed level of performance and zero learning rate. The four virtual gamers have different performance limits, ranging from 1 (which implies 100% probability of success in the entire performance space) to 0.4 (which implies 100% probability of success in 40% of the performance space). After 100 trials, the algorithm could estimate fairly well the performance space of the gamer as defined by the simulator and it could clearly distinguish between different gamers with different levels of performance. (B) Simulation carried out to test if the algorithm can distinguish between gamers with different levels of learning rate (LR). The algorithm was able to adjust the rate of increase in difficulty as a function of the learning rate of the different simulated gamers.
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g006.jpg

FIGURE 6. Simulation of gamers with different performance limits. Performance space estimated by the algorithm after 100, 300, and 500 simulated trials, shown as three-dimensional cube, for three different virtual gamers with fixed limit of performance and zero learning rate (top row: limit = 0.4; middle row: limit = 0.6; bottom row: limit = 0.8). The red area represents high probability of success.
The second simulation investigated the algorithm's ability to distinguish between gamers with different levels of LR (Figure 5B). The performance of the gamer with zero LR does not change over time. Conversely, the slope of the performance curve of gamers with higher LRs becomes steeper in proportion to the rate of increase. As shown in Figure 5B, the algorithm was able to adjust the rate of increase in difficulty as a function of the LR of the different simulated gamers. Figure 7 shows the performance space of three different gamers with different LRs for the three dimensions (i.e., time limit, task difficulty, and switch condition). For each gamer, the LR for one dimension was set to zero (i.e., the gamer does not learn at all) and the LRs for the other two dimensions were set to 1 (i.e., the gamer learns quickly). It is possible to appreciate how the algorithm's estimate changes according to the characteristics of the gamer. The probability of success expands rapidly for the two dimensions with high LR, whereas it does not change for the dimension with zero LR.
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g007.jpg

FIGURE 7. Simulation of gamers with different learning rates. Performance space estimated by the algorithm after 100, 300, and 500 simulated trials, shown as three-dimensional cube, for three different virtual gamers with null initial performance space and different learning rate (LR) for the three dimensions (top row: LR = 0 for X dimension and LR = 1 for Y and Z dimensions; middle row: LR = 0 for Y dimension and LR = 1 for X and Z dimensions; bottom row: LR = 0 for Z dimension and LR = 1 for X and Y dimensions). The red area represents high probability of success. Note that the performance space does not expand through the dimension with zero learning rate.

Validation of the Game with Unimpaired Participants

The videogame “Labyrinth” has been conceived as a tool for training specific skills. The goal of the validation study was to test the new videogame with unimpaired participants. A group of healthy young adults was engaged in a training protocol which involved daily 40 min play sessions with the videogame for 2 weeks.

We also sought to establish that game practice engages the targeted abilities by evaluating the presence of the dual task effect and the task switching effect in the different dependent measures of the game during the first play session. If the alternation between DT and ST works as a switch condition, we should observe a cost in the participants' performance when one task is followed by the other task relative to when it is followed by the same task (Monsell, 2003). Usually the cost consists of worse accuracy in the new task relative to the repeated one and/or slower RTs in the new task relative to the repeated one. Likewise, performing the two tasks at the same time should be more difficult than performing a single task, thereby revealing the cost of multi-tasking.

Videogame output is quite different from that of classic experimental paradigms based on choice reaction times. We extracted three different performance measures from the videogame that became the dependent variables of our analyses. The three types of score were:
(1) Success rate: whether the task was completed with success or not, within the time limit;
(2) Overall time: the time taken to complete the task;
(3) Diamond Time (DT): the time to collect the first diamond;
The DT measure is closer to the trial onset than the other two measures and collecting the first diamond is clearly an immediate and objective sub-goal of the task. Therefore, it should be more sensitive in uncovering effects that might be otherwise undetectable.

Note that the first two measures cannot be used to evaluate the effect of training across sessions because the adaptive algorithm keeps the performance level around 75% by continuously changing the different adaptive dimensions. Nevertheless, we assessed the participants’ progress across sessions in terms of task difficulty level and time limit (see Adaptive Dimensions above). We predicted a trend toward increasing difficulty level and decreasing time limit across sessions as a marker of improved performance in the videogame during training. Moreover, we assessed the effect of training on dual tasking and task switching performance using the DT measure, because the latter is not influenced by the choices of the adaptive algorithm. The time taken to collect the first diamond was compared between single and dual-task conditions (i.e., dual task cost), as well as between repeated and new task conditions (i.e., task switching cost). We expected a decrease of both costs across training sessions.
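For concreteness, the per-session dual-task and switch costs on the DT measure can be computed as simple condition differences. The sketch below assumes a trial-level table with hypothetical column names ('session', 'diamond_time', 'dual', 'switch'); it illustrates the comparison, not the authors' analysis code.

    import pandas as pd

    def diamond_time_costs(df):
        """Per-session dual-task and switch costs from first-diamond times."""
        by_dual = df.groupby(["session", "dual"])["diamond_time"].mean().unstack()
        by_switch = df.groupby(["session", "switch"])["diamond_time"].mean().unstack()
        return pd.DataFrame({
            "dual_task_cost": by_dual[True] - by_dual[False],   # dual minus single
            "switch_cost": by_switch[True] - by_switch[False],  # new task minus repeated
        })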


Method


Participants

Twenty undergraduate students from the University of Padua participated in the study. Their mean age was 20.8 years (range: 19–25). They had normal or corrected-to-normal vision.

Apparatus, stimuli, and procedure

The videogame "Labyrinth" was installed on the personal computer of each participant. Given that the participants were healthy young adults, we set lower bounds for the level of difficulty (level 3) and the time limit (25 s). The training period was 14 days long. Participants played the game for 40 min every day. The duration of the daily training session was enforced by self-termination of the game. The individual performance space estimated by the adaptive algorithm (see Adaptive Dimensions) was saved at the end of each session and reloaded at the beginning of the next session. This ensured that the difficulty of the game was immediately restored to the level achieved in the previous play session. Total play time across the 14 sessions was 9 h and 30 min.


Results


First, we analyzed the data collected in the first session of game playing. The aim of this analysis was to assess the presence of the dual task effect and the switch effect. We performed analysis of variance with the type of task as within-subjects factor. The game performance trend across the training sessions was analyzed using mixed-effects multiple regression models (Baayen et al., 2008). Data were analyzed in the R environment (R Core Team, 2013) using ez package (Lawrence, 2013), lme4 package (Bates et al., 2013), afex package (Singmann, 2013), and lmerTest package (Kuznetsova et al., 2013).

Dual task effect

The effect of dual task was assessed on success rate and DT. Overall time was not used because the dual task condition requires an additional time-consuming operation (i.e., reaching the shelter house) with respect to the diamond task.

Success rate. The effect of the type of task, single vs. dual, was significant, F(1,16) = 311.42, p < 0.001, η2G = 0.91, indicating that in the dual task condition participants were less successful than in the single task condition (see Figure 8A). For example, the player was caught by the snake more often in the dual task than in the snake task, F(1,16) = 33.31, p < 0.05, η2G = 0.46.
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g008.jpg

FIGURE 8. Dual task effect. The comparison between single task and dual task conditions is shown for success rate (A) and time to collect the first diamond (B). Error bars are within-subjects confidence intervals calculated with the method of Morey (2008).
Diamond time. The effect of the type of task, single vs. dual, was significant, F(1,16) = 36.21, p < 0.001, η2G = 0.47, indicating that the time to collect the first diamond in the dual task condition was longer than in the single task condition (see Figure 8B).
Task switch effect

Success rate. The effect of the type of task, new vs. repeated, was significant, F(1,16) = 9.35, p < 0.01, η2G = 0.35, indicating that participants were less successful in trials involving a change of task relative to trials in which the task remained the same, that is a task switching cost (see Figure 9A).
http://www.frontiersin.org/files/Articles/72315/fpsyg-05-00409-HTML/image_m/fpsyg-05-00409-g009.jpg

FIGURE 9. Task switch effect. The comparison between repetition and switch conditions is shown for success rate (A), overall time to complete the task (B), and time to collect the first diamond (C). Error bars are within-subjects confidence intervals calculated with the method of Morey (2008).
Overall time. The effect of the type of task, new vs. repeated, was significant, F(1,16) = 25.08, p < 0.001, η2G = 0.09, indicating that participants were slower in completing the task for trials involving a change of task relative to trials in which the task remained the same (see Figure 9B).

Diamond time. The effect of the type of task, new vs. repeated, was significant, F(1,16) = 83.11, p < 0.001, η2G = 0.25, indicating that participants were slower to collect the first diamond for trials involving a change of task relative to trials in which the task remained the same (see Figure 9C).

Effect of training

We assessed the presence of a training effect within the game (i.e., performance improvement as a function of training time) in terms of changes in the task difficulty level and time limit selected by the algorithm across the 14 sessions. Moreover, we assessed whether dual task and task switching performance on the DT measure improved during the training. We employed mixed-effects multiple regression models (Baayen et al., 2008). By-subject random intercepts were included in all analyses. For the analyses of task difficulty and time limit we applied a logarithmic link function (Jaeger, 2008) and a Poisson variance distribution, which is appropriate for counts of events in a fixed time window (e.g., Baayen, 2008). For the DT analysis we performed Type III tests, calculating p-values via likelihood ratio tests, in order to assess the significance of the main effects and interactions of the predictors (i.e., session and condition).
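As a simplified stand-in for the difficulty-by-session analysis, the sketch below fits a plain Poisson regression with a log link on toy data. Unlike the models reported here it omits the by-subject random intercepts, and the data frame and column names are invented for the example.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Toy stand-in data: one difficulty level per (subject, session).
    trials = pd.DataFrame({
        "subject":    [1, 1, 1, 2, 2, 2],
        "session":    [1, 2, 3, 1, 2, 3],
        "difficulty": [3, 4, 4, 3, 3, 5],
    })

    # Poisson GLM with log link: a positive session coefficient means difficulty rises over sessions.
    fit = smf.glm("difficulty ~ session", data=trials, family=sm.families.Poisson()).fit()
    print(fit.summary())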

Task difficulty. The effect of the session was significant (b = 0.0021, z = 4.59, p < 0.001), indicating that the task difficulty increased (positive beta weight) across the sessions. In the last session, the participants reached a mean difficulty level of 4.67 (SD = 0.14).

Time limit. The effect of the session was significant (b = -0.0040, z = -15.90, p < 0.001), indicating that the time limit decreased (negative beta weight) across the sessions. In the last session, the participants reached a mean time limit of 15.95 (SD = 0.62).

Diamond time: dual task effect. The main effect of session was significant, χ2(1) = 135.71, p < 0.001, indicating that the time to collect the first diamond decreased across sessions. The main effect of condition (single vs. dual) was significant, χ2(1) = 749.41, p < 0.001, indicating that the DT in the dual task condition was longer than in the single task condition. The session-by-condition interaction was significant, χ2(1) = 80.73, p < 0.001, indicating that the effect of session differed between the two conditions. The interaction was inspected by changing the reference level according to the desired contrast. The decrease in DT was significant for both conditions, but the reduction was larger for the dual task condition, as attested by the larger (negative) beta weight (b = -8.19, t = -3.93, p < 0.001 and b = -63.35, t = -10.98, p < 0.001 for single and dual task conditions, respectively).

Diamond time: task switch effect. The main effect of session was significant, χ2(1) = 40.33, p < 0.001, indicating that the DT decreased across sessions. The main effect of condition (new vs. repeated) was significant, χ2(1) = 105.98, p < 0.001, indicating that participants were slower to collect the first diamond on trials involving a change of task relative to trials in which the task remained the same. The session-by-condition interaction was significant, χ2(1) = 9.45, p < 0.01, indicating that the effect of session differed between the two conditions. The interaction was inspected by changing the reference level according to the desired contrast. The decrease in DT was significant for both conditions, but the reduction was larger for the switch (new task) condition, as attested by the larger (negative) beta weight (b = -6.79, t = -2.07, p < 0.05 and b = -19.56, t = -7.70, p < 0.001, for repeated and new conditions, respectively).


Discussion


The aim of this experiment was to validate the game "Labyrinth" in a study with unimpaired participants. Playing a game with these characteristics is likely to involve many different cognitive skills, some more basic and some of a higher level. For example, successful playing requires selecting the relevant information and discarding the irrelevant. Playing until the end of the session requires sustaining attention at an adequate level for a relatively long time. Since the game was conceived to tap specific abilities, we first assessed whether playing the game involved these skills. In particular, we assessed whether the participants' performance showed the cost of dual tasking and the cost of task switching, to confirm the involvement of divided and alternating attention, or flexibility.

The performance of the unimpaired participants in the first play session with the videogame showed the classic dual task cost across the different performance measures. The success rate was higher in the single tasks than in the dual task condition. The dual task effect was also confirmed in the time-dependent variable: the time to collect the first diamond was longer when the gamer had to collect the diamond and avoid the snake at the same time compared to when she only had to collect diamonds. Therefore, the results confirm a robust dual task effect, thereby showing that completing the two tasks simultaneously requires dividing attention between the two goals (as well as between diamond and snake stimuli).

The analyses of the three performance measures also revealed a robust effect of task switching. In this case we compared the performance between the condition of repetition, when one task followed a task of the same type (e.g., DT after DT), with the condition of non-repetition, when one task followed a task of the other type (e.g., DT after ST). Success rate was higher in the condition of repetition than in the switch condition, in line with the findings using the classic task switch paradigm (Monsell, 2003). Likewise, the time to complete the task and the time to collect the first diamond showed a switch cost, with longer times for the switch condition compared to the repetition condition. Therefore, changing the task showed the need for reconfiguration or inhibition of the cognitive set of the prior task, thereby involving cognitive flexibility.

Overall, the performance improved throughout the training as indicated by the increase of task difficulty across sessions. This means that the algorithm moved the performance threshold toward more difficult levels because the participants became more skilled in the achievement of the goals. In the same vein, the maximum time allowed to accomplish the task decreased across sessions, indicating that participants became faster in the achievement of the goals. Moreover, using DT as performance index, we found that the cost of dual-tasking as well as the cost of task switching decreased during training. Though the time to collect the first diamond showed an overall decrease across sessions, the improvement was significantly stronger for the dual task condition than for the single task condition, thereby suggesting that players became more efficient in route planning under dual task. In the same vein, the comparison between repeated and new task conditions (i.e., task switching) showed a stronger performance improvement for the switch condition. These results suggest that playing with Labyrinth enhanced the participants’ attentional control, at least in terms of the ability to manage multitasking and to quickly reconfigure the task set. This finding is in line with studies showing that extensive dual task training enhances the ability of multitasking (Van Selst et al., 1999; Schumacher et al., 2001; Tombu and Jolicoeur, 2004).

Generalization beyond the task used for training is an important issue in the area of cognitive enhancement and rehabilitation. The training effect should transfer to other tasks to make the training genuinely beneficial. We leave this issue to a follow-up study, but we believe that the characteristics of the game, for example the alternation between tasks as well as multitasking, may stimulate high-level attention functions as opposed to task specialization. Flexibility and control over attentional resources are clearly relevant in a variety of daily-life situations. An investigation of the relationship between videogame play and a comprehensive battery of cognitive/attentional tests would indeed clarify this issue (see Baniqued et al., 2013) and would explicitly assess transfer to specific skills like task switching and multitasking.


Conclusion


There is a growing body of evidence that videogame playing can enhance a variety of specific skills in addition to speeding up information processing (e.g., Hubert-Wallander et al., 2011a). Moreover, gaming seems to promote transfer to more ecological settings and generalization to untrained skills. Here we attempted to design a new videogame including features that were conceived to specifically involve attention and executive functions, with the final purpose of using it to support the rehabilitation of TBI patients. Cognitive deficits following TBI can profoundly affect daily living (Sohlberg and Mateer, 2001) because they often involve executive and attentional functions that are fundamental for controlling and modulating other, more basic abilities. The design of the game was guided by principles relevant for the training of those functions. Therefore, its aim was to enhance mental flexibility (switching between different cognitive sets) and multi-tasking (maintaining the cognitive sets of two different tasks and dividing attentional resources between two goals), to stimulate planning ability (choosing an adequate strategy, interrupting automatic responses, and monitoring performance), and to encourage faster processing. Most importantly, the videogame was equipped with a multidimensional adaptive algorithm that provided a continuous, online calibration of the level of difficulty across three different dimensions to the gamer's current performance. We believe that this latter feature is crucial for managing the performance variability of patients. The development of the game included different testing stages. In the first stage, we simulated users with different performance profiles to assess the efficiency of the adaptive algorithm in estimating user ability. In the second stage, we validated the game with unimpaired participants to ensure that the game involves the activation of the desired cognitive functions as well as to assess the effect of a short training period. Thus, the next step will be to test the videogame in a controlled clinical trial with TBI patients to assess whether it is useful for the remediation of attentional and executive impairments.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This study was supported by grants from the European Research Council (grant no. 210922) and the University of Padova (Strategic Project “NEURAT”) to Marco Zorzi.

References available at the Frontiers site