A new study is exploring how a person’s native language can influence the way the brain processes spoken words in a second language.
Because the cues that signal where words begin and end differ from language to language, a person’s native language can mislead learners trying to segment a second language into words. Annie Tremblay, an assistant professor of linguistics at the University of Kansas, is trying to better understand the kinds of cues second language learners listen for when recognizing words in continuous speech. She is also studying how adaptive adult learners are in acquiring these new speech cues.
Working with a group of international collaborators in the Netherlands, Korea, and France, Tremblay received a three-and-a-half-year, $259,000 National Science Foundation grant for the research.
“The moment we hear a new language, all of a sudden we hear a stream of sounds and don’t know where the words begin or end,” Tremblay said. “Even if we know words from the second language and can recognize them in isolation, we may not be able to locate these words in continuous speech, because a variety of processes affect how words are realized in context.”
For second language learners, some cues are easier to pick up than others, such as which consonants commonly begin or end words. One example is the “z” sound, which often ends words in English but rarely begins them.
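To make the idea concrete, here is a toy sketch in Python. The cue inventory and phoneme strings are invented for illustration, not drawn from the study; it shows only how a single phonotactic fact, that /z/ often ends English words but rarely begins them, can hint at a word boundary.

```python
# Toy illustration (not the study's method): a listener who knows that
# /z/ often ends English words but rarely begins them can treat /z/ as
# a weak hint that a word boundary follows.

ENDS_WORD_OFTEN = {"z"}      # common word-final consonants (toy set)
STARTS_WORD_RARELY = {"z"}   # rare word-initial consonants (toy set)

def boundary_hints(phonemes):
    """Return indices after which a word boundary is plausible."""
    hints = []
    for i, ph in enumerate(phonemes[:-1]):
        nxt = phonemes[i + 1]
        # A /z/ suggests the current word is ending, and a /z/-initial
        # next word would be unusual in English.
        if ph in ENDS_WORD_OFTEN and nxt not in STARTS_WORD_RARELY:
            hints.append(i)
    return hints

# "dogs run" as a rough phoneme sequence: the /z/ in "dogs" hints
# that a word boundary follows it.
print(boundary_hints(["d", "o", "g", "z", "r", "ʌ", "n"]))  # -> [3]
```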
Other cues, such as intonation, are harder to master and are more likely to be influenced by a speaker’s native language. Tremblay points to English, where a stressed syllable is a strong indication that a new word is beginning. In French, the opposite is true: prominent syllables tend to fall at the end of words.
“This kind of information can’t be memorized in a language such as French. It has to be computed. And this is where second language learners struggle,” Tremblay said.
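A rough sketch of what “computing” such a cue might look like, with an invented syllable stream and deliberately simplified listener rules (these are illustrative assumptions, not Tremblay’s model):

```python
# Toy sketch: the same prominent syllable leads an "English-like"
# listener and a "French-like" listener to posit word boundaries in
# different places.

def english_like_boundaries(syllables):
    """Posit a word boundary BEFORE each stressed syllable."""
    return [i for i, (_, stressed) in enumerate(syllables)
            if stressed and i > 0]

def french_like_boundaries(syllables):
    """Posit a word boundary AFTER each prominent syllable."""
    return [i + 1 for i, (_, stressed) in enumerate(syllables)
            if stressed and i + 1 < len(syllables)]

# (syllable, prominent?) pairs for a made-up stream "ba-DA-ko-mi"
stream = [("ba", False), ("DA", True), ("ko", False), ("mi", False)]

print(english_like_boundaries(stream))  # -> [1]: a word starts at "DA"
print(french_like_boundaries(stream))   # -> [2]: a word ends after "DA"
```

On this toy account, the same prominent syllable marks the start of a word for an English-tuned listener but the end of one for a French-tuned listener.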
An example of this confusion is “chat grincheux,” the French phrase for “cranky cat.” For a brief moment, the phrase can sound like the English pronunciation of “chagrin,” a word with French origins.
“If you hear the ‘cha’ syllable as being prominent, it cannot come from the word chagrin in French because the first syllable of chagrin will not be stressed in French,” Tremblay said.
With her international collaborators, Tremblay manipulates intonation cues like those in the example above to test how listeners use these cues to recognize words. In one experiment, participants hear a sentence containing a phrase such as “chat grincheux,” see four words on a computer screen, such as “chat,” “chagrin” and two unrelated words, and are asked to click on the correct one. An eye-tracking device records when and for how long each participant looks at each word.
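A hypothetical sketch of how such eye-tracking data might be summarized; the sample format and time windows below are invented, but the proportion-of-looks measure is a standard summary in this kind of visual-world task:

```python
# Hypothetical analysis sketch: compute the proportion of looks to each
# candidate word within a time window. The data format is invented.

from collections import Counter

# Each sample: (time_ms, word_being_fixated); one sample every 20 ms.
samples = [
    (0, "chat"), (20, "chagrin"), (40, "chagrin"),
    (60, "chat"), (80, "chat"), (100, "chat"),
]

def fixation_proportions(samples, window_start, window_end):
    """Proportion of looks to each word within a time window."""
    in_window = [w for t, w in samples if window_start <= t < window_end]
    counts = Counter(in_window)
    total = len(in_window)
    return {word: n / total for word, n in counts.items()}

# Early window: competition between "chat" and "chagrin".
print(fixation_proportions(samples, 0, 60))    # {'chat': 0.33…, 'chagrin': 0.66…}
# Later window: looks settle on the intended word.
print(fixation_proportions(samples, 60, 120))  # {'chat': 1.0}
```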
In another experiment, participants listen to an artificial language for 20 minutes and are then asked to identify words in it.
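A minimal sketch of how such a stream might be built; the three invented words and all other details are illustrative assumptions, not the study’s materials, and Tremblay’s version additionally manipulates intonation cues over the stream:

```python
# Illustrative sketch: artificial-language experiments typically
# concatenate a few invented words into one continuous, pause-free
# stream, so the only clues to word boundaries are the statistical and
# prosodic patterns in the stream itself.

import random

random.seed(0)
WORDS = ["tupiro", "golabu", "bidaku"]  # invented vocabulary

def make_stream(n_words):
    """Concatenate randomly ordered words with no pauses between them."""
    return "".join(random.choice(WORDS) for _ in range(n_words))

print(make_stream(8))
# Listeners hear a stream like this, then are tested on whether a true
# word ("tupiro") sounds more familiar than a sequence that spans a
# word boundary ("pirogo").
```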
So far the research group has studied native English and Korean speakers who have learned French, and native French speakers who live in France or in the U.S.
One of the more interesting findings is that when two languages are broadly similar but differ in subtle ways, it can be harder for second language learners to use the correct speech cues to identify words. In both French and Korean, for example, prominent syllables tend to be at the end of words. But there is one small difference: in Korean, intonation drops before the next word begins, while in French it drops during the first syllable of the next word.
“For English speakers, the differences between English stress and French prominence are so salient that it ought to be obvious and they ought to readjust their system,” Tremblay said. “Whereas in Korean they think, ‘Oh, this is just like Korean.’ It sounds similar, and they don’t readjust their use of this information.”
Researchers also found that native French speakers living in France were better than those living in the U.S. at using French-like intonation cues to locate words in an artificial language. In fact, the longer native French speakers had lived in the U.S., the worse they were at using the cues of their native language.
“This suggests that the speech processing system is extremely adaptive. Despite all the claims about the existence of a critical period for language learning, the speech processing system is actually very flexible; it might just take a long time to completely override the effects of the native language,” Tremblay said.
The research group continues to collect data and plans to include native Dutch speakers who speak French.