A comparative study of the effects of a computerized English oral proficiency test format and a conventional SPEAK test format

2006, Doctor of Philosophy, Ohio State University, Teaching and Learning.
Despite the increasing use of computer technology in language testing, little research is available on the validity, reliability, and nature of computerized spoken language tests. To date, studies of interactions between test taker characteristics and computerized language test formats have reported only mixed results. To add to the body of research on this topic, this study explored the relationship between test taker characteristics and test delivery format during spoken English proficiency assessments. A total of 210 international students whose native language was not English were recruited at a U.S. university in autumn 2005. The main data sources included the results of a computerized spoken English test, an audio-taped SPEAK test, and responses to a questionnaire. For data analysis, this study used a 2×2×2 mixed factorial design with random assignment. The three-way interaction among the independent variables (self-reported years of English study, self-reported computer use, and test delivery format) was not significant. Self-reported years of English study and test delivery format, however, interacted significantly to influence spoken English test scores. Specifically, the computerized speaking test, but not the audio-taped SPEAK test, seemed to affect test results more for the group that reported fewer years of English study than for the group that reported more. In addition, self-reported computer use did not significantly affect test results during oral proficiency assessments. Although this study was limited to a single research site, a single test, and self-reported data, it corroborated previous research emphasizing that test delivery formats should be chosen to fit the purposes of the tests and the characteristics of test takers.
Accordingly, this study called for further research to ensure that a test functions fairly across various types of test takers regardless of their backgrounds. It also suggested sharing ownership of testing among test makers, test takers, and test users, which might allow all interested parties to receive the benefits of testing. Finally, the findings will be useful for understanding both the benefits and disadvantages of using technology in language testing.
Charles Hancock (Advisor)
182 p.

Recommended Citations

  • Yu, E. (2006). A comparative study of the effects of a computerized English oral proficiency test format and a conventional SPEAK test format [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1164601340

    APA Style (7th edition)

  • Yu, Eunjyu. A comparative study of the effects of a computerized English oral proficiency test format and a conventional SPEAK test format. 2006. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1164601340.

    MLA Style (8th edition)

  • Yu, Eunjyu. "A comparative study of the effects of a computerized English oral proficiency test format and a conventional SPEAK test format." Doctoral dissertation, Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1164601340

    Chicago Manual of Style (17th edition)