Determining cloze item difficulty from item and passage characteristics across different learner backgrounds

Jonathan Trace, James Dean Brown, Gerriet Janssen, Liudmila Kozhevnikova

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Cloze tests have been the subject of numerous studies regarding their function and use in both first language and second language contexts (e.g., Jonz & Oller, 1994; Watanabe & Koyama, 2008). From a validity standpoint, one area of investigation has been the extent to which cloze tests measure reading ability beyond the sentence level. Using test data from 50 30-item cloze passages administered to 2,298 Japanese and 5,170 Russian EFL students, this study examined the degree to which linguistic features of cloze passages and items influenced item difficulty. Using a common set of 10 anchor items, all 50 tests were modeled in terms of person ability and item difficulty onto a single scale using many-faceted Rasch measurement (k = 1314). Principal components analysis was then used to categorize 25 linguistic item- and passage-level variables for the 50 cloze tests and their respective items, from which three components each were identified for the passage- and item-level variables. These six components, along with item difficulty, were then entered into both a hierarchical structural equation model and a linear multiple regression to determine the degree to which difficulty in cloze tests could be explained separately by passage and item features. Comparisons were further made by examining differences in the models by nationality and by proficiency level (high and low). The analyses revealed noteworthy differences in mean item difficulties and in the variance structures between passage- and item-level features, as well as between different examinee proficiency groups.
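
As an illustration of the analysis pipeline described above, the two sketches below are minimal Python approximations using synthetic data. The sample sizes, variable names, and the pooled treatment of the 25 linguistic variables are assumptions for illustration only; neither sketch reproduces the many-facet Rasch model or the hierarchical structural equation model actually fitted in the study.

First, a rough sense of where a logit item difficulty comes from: the centered log-odds of incorrect responses for each item (a PROX-style starting estimate), which only crudely stands in for the anchored many-facet Rasch calibration used across the 50 forms.

import numpy as np

# Hypothetical dichotomous response matrix: 500 persons x 30 items (synthetic data).
rng = np.random.default_rng(1)
responses = (rng.random((500, 30)) > 0.4).astype(int)

# Log-odds of an incorrect response per item, centered so mean difficulty = 0 logits
# (a PROX-style first approximation, not a full Rasch calibration).
p_correct = responses.mean(axis=0)
item_difficulty = np.log((1 - p_correct) / p_correct)
item_difficulty -= item_difficulty.mean()
print(np.round(item_difficulty, 2))

Second, a sketch of the reduction-and-regression step: standardize 25 hypothetical linguistic predictors, extract principal components, and regress item difficulty on the resulting component scores with a linear multiple regression.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,314 items, 25 linguistic features each, plus a
# Rasch difficulty (in logits) per item.
n_items, n_features = 1314, 25
X = rng.normal(size=(n_items, n_features))
difficulty = rng.normal(size=n_items)

# Standardize, then extract six components from the pooled synthetic feature set;
# the study instead identified three components each for the separate
# passage-level and item-level variable sets.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=6)
scores = pca.fit_transform(X_std)
print("Variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Linear multiple regression of item difficulty on the component scores.
reg = LinearRegression().fit(scores, difficulty)
print("R-squared:", round(reg.score(scores, difficulty), 3))
print("Coefficients:", np.round(reg.coef_, 3))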

Original language: English
Pages (from-to): 151-174
Number of pages: 24
Journal: Language Testing
Volume: 34
Issue number: 2
DOI: 10.1177/0265532215623581
ISSN: 0265-5322
Publisher: SAGE Publications Ltd
Publication status: Published - 2017 Apr 1
Externally published: Yes

Keywords

  • Cloze
  • item difficulty
  • many-facet Rasch measurement
  • reading
  • structural equation modeling

ASJC Scopus subject areas

  • Language and Linguistics
  • Social Sciences (miscellaneous)
  • Linguistics and Language

Cite this

Determining cloze item difficulty from item and passage characteristics across different learner backgrounds. / Trace, Jonathan; Brown, James Dean; Janssen, Gerriet; Kozhevnikova, Liudmila.

In: Language Testing, Vol. 34, No. 2, 01.04.2017, p. 151-174.
