Nonverbal and other visual cues are well established as a critical component of human communication. Under most circumstances, visual information is available to aid in the comprehension and interpretation of spoken language. Citing these facts, many L2 assessment researchers have studied video-mediated listening tests by comparing scores with those on audio-only tests, by measuring the amount of time spent watching, and by attempting to determine examinee viewing behavior through self-reports. However, the specific visual cues to which examinees attend have not heretofore been measured objectively. The present research employs eye-tracking methodology to determine how long 12 participants viewed specific visual cues on a six-item, video-mediated L2 listening test. Seventy-two scanpath-overlaid videos of viewing behavior were manually coded for visual cues at 0.10-second intervals. Cued retrospective interviews based on the eye-tracking data provided reasons for the observed behaviors. Faces occupied the majority (81.74%) of visual dwell time, with participants largely splitting their attention between the speaker's eyes and mouth. Detected gesture viewing was negligible. The reason given for most viewing behavior was determining characters' emotional states. These findings suggest that the primary difference between audio- and video-mediated L2 listening tests of conversational content is the presence or absence of facial expressions.