
Wednesday, June 02, 2010

BPS Research - Inner words spoken in silence

It seems that we have no internal monitor of our speech before it becomes public - in essence, we don't know what we are saying until we have said it. This is another piece of the puzzle of how our brains function.

My sense of this is different, at least a little bit. I seem to be able to switch on and off a more formal language that I use in school, with clients, or at social gatherings. On the other hand, when I am just hanging with Jami or working out with a friend, my language is much less formal. I'm not sure how this relates to the study at hand, but it may suggest that we can choose the quality of our language, if not the content.

Inner words spoken in silence

As the words fall from your lips, it's the first you've heard of them. That is, you don't have a sneak preview of what your own words sound like before you utter them. That's according to Falk Huettig and Robert Hartsuiker, who say their finding has implications for our understanding of the brain's internal monitoring processes.

The researchers took advantage of an established effect whereby the sound of a spoken word draws our eyes automatically towards written words that sound similar. Forty-eight Dutch-speaking undergrads were presented with a succession of line drawings, each of which appeared alongside three written words. The participants' task was to name out loud the objects in the drawings. Meanwhile the researchers monitored their eye movements.

On each trial, one of the written words sounded like the name of the drawn object - for example, for a drawing of a heart ('hart' in Dutch), the accompanying words were: harp (also 'harp' in English), zetel ('couch') and raam ('window'). As expected, after saying the word 'hart', the participants' eyes were drawn to the word 'harp'. The key question was whether this happened earlier than in previous studies in which participants heard the target words spoken by someone else rather than by themselves. If we hear our own speech internally, before we utter it, then the participants' eyes should have been drawn to the similar-sounding words earlier than if they'd heard another person's utterances.

In fact, the participants' eyes were drawn to the similar-sounding words with a latency (around 300ms) that suggested they'd only heard their own utterances once they were public. There was no sneak internal perceptual preview.

It's important to clarify: we definitely do monitor our speech internally. For example, speakers can detect their speech errors even when their vocal utterances are masked by noise. What this new research suggests is that this internal monitoring isn't done perceptually - we don't 'hear' a pre-release copy of our own utterances. What's the alternative? Huettig and Hartsuiker said error-checking is somehow built into the speech production system, but they admit: 'there are presently no elaborated theories of [this] alternative viewpoint.'
_________________________________

Huettig, F., & Hartsuiker, R. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 25(3), 347-374. DOI: 10.1080/01690960903046926 [open access]
