A comparison of video- and audio-mediated listening tests with many-facet Rasch modeling and differential distractor functioning

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item response theory in the field of language testing as a whole. The present study addresses this gap by comparing data from identical, counter-balanced multiple-choice listening test forms employing three text types (monologue, conversation, and lecture) administered to 164 university students of English in Japan. Data were analyzed via many-facet Rasch modeling to compare the difficulties of the audio and video formats; to investigate interactions between format and text-type, and format and proficiency level; and to identify specific items biased toward one or the other format. Finally, items displaying such differences were subjected to differential distractor functioning analyses. No interactions between format and text-type, or format and proficiency level, were observed. Four items were discovered displaying format-based differences in difficulty, two of which were found to correspond to possible acting anomalies in the videos. The author argues for further work focusing on item-level interactions with test format.
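The many-facet Rasch model used in the study extends the basic Rasch model by adding a facet for each measurement condition — here, person ability, item difficulty, and presentation format — on a common logit scale. A minimal sketch of the response-probability function, with all parameter values hypothetical and chosen purely for illustration:

```python
import math

def rasch_probability(ability, item_difficulty, format_difficulty):
    """Probability of a correct response under a many-facet Rasch model:
    the log-odds of success are the person's ability minus the item's
    difficulty minus the difficulty contributed by the test format
    (all expressed in logits)."""
    logit = ability - item_difficulty - format_difficulty
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical values: a person of average ability (0.0 logits) answering
# an item of difficulty 0.5 logits in a format adding 0.2 logits.
p = rasch_probability(0.0, 0.5, 0.2)
```

A format-by-item interaction of the kind the study reports would appear as an item whose difficulty estimate shifts significantly when the format facet changes, beyond what the overall format parameter predicts.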

Original language: English
Pages (from-to): 3-20
Number of pages: 18
Journal: Language Testing
Volume: 32
Issue number: 1
DOI: 10.1177/0265532214531254
Publication status: Published - 2015 Jan 30


Keywords

  • differential distractor functioning
  • language assessment
  • listening assessment
  • many-facet Rasch measurement
  • nonverbal communication
  • video listening test

ASJC Scopus subject areas

  • Linguistics and Language
  • Social Sciences (miscellaneous)
  • Language and Linguistics

Cite this

@article{75dbf954151a474390e1e332728ab360,
title = "A comparison of video- and audio-mediated listening tests with many-facet Rasch modeling and differential distractor functioning",
abstract = "The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item response theory in the field of language testing as a whole. The present study addresses this gap by comparing data from identical, counter-balanced multiple-choice listening test forms employing three text types (monologue, conversation, and lecture) administered to 164 university students of English in Japan. Data were analyzed via many-facet Rasch modeling to compare the difficulties of the audio and video formats; to investigate interactions between format and text-type, and format and proficiency level; and to identify specific items biased toward one or the other format. Finally, items displaying such differences were subjected to differential distractor functioning analyses. No interactions between format and text-type, or format and proficiency level, were observed. Four items were discovered displaying format-based differences in difficulty, two of which were found to correspond to possible acting anomalies in the videos. The author argues for further work focusing on item-level interactions with test format.",
keywords = "differential distractor functioning, language assessment, listening assessment, many-facet Rasch measurement, nonverbal communication, video listening test",
author = "Aaron Batty",
year = "2015",
month = "1",
day = "30",
doi = "10.1177/0265532214531254",
language = "English",
volume = "32",
pages = "3--20",
journal = "Language Testing",
issn = "0265-5322",
publisher = "SAGE Publications Ltd",
number = "1",
}
