People track when talkers say ‘uh’ to predict what comes next

Spontaneous conversation is riddled with disfluencies such as pauses and ‘uhm’s: on average, people produce six disfluencies every 100 words. But disfluencies do not occur randomly. Instead, ‘uh’ typically occurs before ‘hard-to-name’ low-frequency words (‘uh… automobile’). Previous experiments led by Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics have shown that people can use disfluencies to predict upcoming low-frequency words. But Bosker and his colleagues went one step further. They tested whether listeners would actively track the occurrence of ‘uh’, even when it appeared in unexpected places.

Click on uh… the igloo

The researchers used eye-tracking, which measures where people look on a screen. Two groups of Dutch participants saw two images on a screen (for instance, a hand and an igloo) and heard both fluent and disfluent instructions. However, one group heard a ‘typical’ talker say ‘uh’ before ‘hard-to-name’ low-frequency words (“Click on uh… the igloo”), while the other group heard an ‘atypical’ talker say ‘uh’ before ‘easy-to-name’ high-frequency words (“Click on uh… the hand”). Would people in this second group track the unexpected occurrences of ‘uh’ and learn to look at the ‘easy-to-name’ object?

As expected, participants listening to the ‘typical’ talker already looked at the igloo upon hearing the disfluency (‘uh…’), that is, well before hearing ‘igloo’. Interestingly, people listening to the ‘atypical’ talker learned to adjust this ‘natural’ prediction. Upon hearing a disfluency (‘uh…’), they learned to look at the common object, even before hearing the word itself (‘hand’). “We take this as evidence that listeners actively keep track of when and where talkers say ‘uh’ in spoken communication, adjusting what they predict will come next for different talkers,” concludes Bosker.

Speakers with a foreign accent

Would listeners also adjust their expectations for a non-native speaker? In a follow-up experiment, the same sentences were spoken by someone with a heavy Romanian accent. In this experiment, participants did learn to predict uncommon objects from a ‘typical’ non-native talker (who said ‘uh’ before low-frequency words). However, they did not learn to predict high-frequency referents from an ‘atypical’ non-native talker (who said ‘uh’ before high-frequency words), even though the sentence materials were identical across the native and non-native experiments.

Geertje van Bergen, co-author on the paper, explains: “This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying ‘uh’ before common words like “hand” and “car”) led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch. As such, they presumably took the non-native disfluencies to not be predictive of the word to follow — in spite of the clear distributional cues indicating otherwise.” This finding is interesting, as it reveals an interplay between ‘disfluency tracking’ and ‘pragmatic inferencing’: we only track disfluencies if we infer from the talker’s voice that the talker is a ‘reliable’ uhm’er.

A hot topic in psycholinguistics

According to the authors, this is the first evidence of distributional learning in disfluency processing. “We’ve known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say ‘uh’ on a moment-by-moment basis, adjusting their predictions about what will come next,” explains Bosker. Distributional learning has been a hot topic in psycholinguistics in the past few years. “We extend this field with evidence for distributional learning of metalinguistic performance cues, namely disfluencies — highlighting the wide scope of distributional learning in language processing.”
