Directing Post-Editors’ Attention to Machine Translation Output that Needs Editing through an Enhanced User Interface: Viability and Automatic Application via a Word-level Translation Accuracy Indicator

Gilbert, Devin Robert


2022, PhD, Kent State University, College of Arts and Sciences / Department of Modern and Classical Language Studies.
Post-editing of machine translation (MT) is a workflow that is being used for an increasing number of text types and domains (Koponen, 2016; Hu, 2020; Zouhar et al., 2021), but the sections of text that post-editors need to fix have become harder to detect due to the increased human-like fluency that neural machine translation (NMT) affords (Comparin & Mendes, 2017; Yamada, 2019). This dissertation seeks to address this problem by developing a word-level machine translation quality estimation (MTQE) system that highlights words in raw MT output that need editing, in order to aid post-editors. This MTQE system is then tested in a large-scale post-editing experiment to determine whether it increases productivity and decreases cognitive effort and error rate. The system is based on two automatically generated features: word translation entropy, computed from the output of multiple MT systems (a feature that has never before been used in MTQE), and word class (based on part-of-speech tags). For the post-editing experiment, a within-subjects design assigns raw MT output to participants under three conditions. The two experimental conditions consist of MT output enhanced with highlighting around the stretches of text that likely need to be edited. In the first experimental condition, this highlighting is supplied automatically by the MTQE system; in the second, it is supplied by an experienced translator indicating which text needs editing. The control condition consists of MT output without highlighting.
Participants post-edit three experimental texts in Trados Studio while time-stamped keystroke logs are gathered (later integrated into the CRITT Translation Process Research Database (TPR-DB)), and various measures of temporal, technical, cognitive, and perceived effort, as well as group editing activity, are used to assess the efficacy and usefulness of highlighting potential errors in the post-editing user interface.
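The word translation entropy feature described in the abstract can be illustrated with a minimal sketch: given the translation candidates that several MT systems propose for one source word, Shannon entropy over those choices measures how much the systems disagree. This is only an illustration of the general idea, not the dissertation's implementation; the function name is hypothetical, and it assumes per-word alignments across systems are already available.

```python
import math
from collections import Counter

def word_translation_entropy(candidate_translations):
    """Shannon entropy over the translation choices that several MT
    systems propose for a single source word.

    candidate_translations: one target-word string per MT system,
    e.g. ["house", "house", "home"]. Higher entropy means the systems
    disagree more, suggesting the word may need editing.
    """
    counts = Counter(candidate_translations)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# All systems agree -> zero entropy (low editing risk).
print(word_translation_entropy(["house", "house", "house"]))  # prints 0.0

# Systems disagree -> higher entropy (candidate for highlighting).
print(word_translation_entropy(["house", "home", "dwelling"]))  # ~1.585, i.e. log2(3)
```

Under this sketch, words whose entropy exceeds some threshold would be the ones highlighted for the post-editor; the threshold and the combination with the part-of-speech feature are left to the dissertation's actual model.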
Michael Carl (Advisor)
Lucas Nunes Vieira (Committee Member)
Isabel Lacruz (Committee Member)
Erik Angelone (Committee Member)
193 p.

Recommended Citations


APA Style (7th edition)

  • Gilbert, D. R. (2022). Directing Post-Editors’ Attention to Machine Translation Output that Needs Editing through an Enhanced User Interface: Viability and Automatic Application via a Word-level Translation Accuracy Indicator [Doctoral dissertation, Kent State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=kent1657213218346773

MLA Style (8th edition)

  • Gilbert, Devin. Directing Post-Editors’ Attention to Machine Translation Output that Needs Editing through an Enhanced User Interface: Viability and Automatic Application via a Word-level Translation Accuracy Indicator. 2022. Kent State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=kent1657213218346773.

Chicago Manual of Style (17th edition)

  • Gilbert, Devin. "Directing Post-Editors’ Attention to Machine Translation Output that Needs Editing through an Enhanced User Interface: Viability and Automatic Application via a Word-level Translation Accuracy Indicator." Doctoral dissertation, Kent State University, 2022. http://rave.ohiolink.edu/etdc/view?acc_num=kent1657213218346773