TY - JOUR
T1 - Going online
T2 - The effect of mode of delivery on performances and perceptions on an English L2 writing test suite
AU - Brunfaut, Tineke
AU - Harding, Luke
AU - Batty, Aaron Olaf
N1 - Funding Information:
This paper reports on a research project funded by, and carried out under, Trinity College London’s funded research programme. Any opinions, findings, conclusions, or recommendations expressed are those held by the authors and do not necessarily reflect the views of Trinity, its examiners, service providers, or registered examination centres. Tineke Brunfaut is a senior lecturer in the Department of Linguistics and English Language at Lancaster University (UK). Her main research interests are in language testing, and reading and listening in a second/foreign language. She is a recipient of the ILTA Best Article Award and the TOEFL Outstanding Young Scholar Award. Luke Harding is a senior lecturer in the Department of Linguistics and English Language at Lancaster University (UK). His research interests are in language testing, particularly listening assessment, pronunciation and intelligibility, assessor decision-making, and language assessment literacy. He is a recipient of the ILTA Best Article Award. Aaron Batty is an associate professor at Keio University, Japan. His main research interests are in language testing, in particular listening assessment, item response theory and Rasch models. His work has been published in journals such as Language Testing and TESOL Quarterly.
Publisher Copyright:
© 2018 The Authors
PY - 2018/4
Y1 - 2018/4
N2 - In response to changing stakeholder needs, large-scale language test providers have increasingly considered the feasibility of delivering paper-based examinations online. Evidence is required, however, to determine whether online delivery of writing tests results in changes to writing performance reflected in differential test scores across delivery modes, and whether test-takers hold favourable perceptions of online delivery. The current study aimed to determine the effect of delivery mode on the two writing tasks (reading-into-writing and extended writing) within the Trinity College London Integrated Skills in English (ISE) test suite across three proficiency levels (CEFR B1–C1). A total of 283 test-takers (107 at ISE I/B1, 109 at ISE II/B2, and 67 at ISE III/C1) completed both writing tasks in paper-based and online modes. Test-takers also completed a questionnaire to gauge perceptions of the impact, usability and fairness of the delivery modes. Many-facet Rasch measurement (MFRM) analysis of scores revealed that delivery mode had no discernible effect, apart from the reading-into-writing task at ISE I, where the paper-based mode was slightly easier. Test-takers generally held more positive perceptions of the online delivery mode, although technical problems were reported. Findings are discussed with reference to the need for further research into interactions between delivery mode, task and level.
AB - In response to changing stakeholder needs, large-scale language test providers have increasingly considered the feasibility of delivering paper-based examinations online. Evidence is required, however, to determine whether online delivery of writing tests results in changes to writing performance reflected in differential test scores across delivery modes, and whether test-takers hold favourable perceptions of online delivery. The current study aimed to determine the effect of delivery mode on the two writing tasks (reading-into-writing and extended writing) within the Trinity College London Integrated Skills in English (ISE) test suite across three proficiency levels (CEFR B1–C1). A total of 283 test-takers (107 at ISE I/B1, 109 at ISE II/B2, and 67 at ISE III/C1) completed both writing tasks in paper-based and online modes. Test-takers also completed a questionnaire to gauge perceptions of the impact, usability and fairness of the delivery modes. Many-facet Rasch measurement (MFRM) analysis of scores revealed that delivery mode had no discernible effect, apart from the reading-into-writing task at ISE I, where the paper-based mode was slightly easier. Test-takers generally held more positive perceptions of the online delivery mode, although technical problems were reported. Findings are discussed with reference to the need for further research into interactions between delivery mode, task and level.
KW - Computer-based testing of writing
KW - Mode of delivery
KW - Online testing of writing
KW - Paper-based testing of writing
KW - Perceptions
KW - Second language writing assessment
UR - http://www.scopus.com/inward/record.url?scp=85042566575&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85042566575&partnerID=8YFLogxK
U2 - 10.1016/j.asw.2018.02.003
DO - 10.1016/j.asw.2018.02.003
M3 - Article
AN - SCOPUS:85042566575
SN - 1075-2935
VL - 36
SP - 3
EP - 18
JO - Assessing Writing
JF - Assessing Writing
ER -