Video: Prof Stephen Doherty's lecture on the use of eye tracking for better understanding and enhancing remote interpreting
Watch the full video recording of Prof Stephen Doherty's lecture. This lecture was delivered on the 23rd of September 2021 as part of the Centre for Translation Studies' Convergence lecture series.
Title of the lecture: Using eye tracking to better understand and enhance visual attention, cognitive load, and interpreting performance in remote interpreting
Continued technological advances have increased the availability of remote interpreting, which is increasingly being employed worldwide in a variety of contexts due to its potential to reduce time and costs. The COVID-19 pandemic has further accelerated its usage and risks presenting it as a panacea, with long-term changes to the provision of interpreting services. However, risks of miscommunication have been shown to be magnified in remote interpreting, and empirical research is still developing given the inherent diversity and complexity of the field. As such, we have a relatively limited evidence base available to inform and direct evidence-based policy and best practice, particularly in high-stakes medical and legal settings. This paper reports on two projects that aim to help address these issues by providing empirical insight into interpreters’ visual attention and cognitive load in remote interpreting.
The first project compared the cognitive load and overt visual attention of interpreters in a simulated investigative interview of high ecological validity, in which 50 professionally accredited interpreters interpreted via audio- or video-link, where consecutive and simultaneous interpreting modes were counterbalanced and randomly assigned. Relative to interpreting performance, the consecutive mode and video medium yielded higher cognitive load. We also found that the consecutive mode yielded significantly less gaze time and therefore less on-screen overt visual attention due to off-screen notetaking. Relative to gaze time, the consecutive mode also resulted in more and longer fixations and shifts of attention. Participants also allocated more overt visual attention to the Interviewer than the Suspect, particularly in the consecutive mode. Furthermore, we found informative correlations between eye tracking measures and interpreting performance.
The second project, which has just commenced, focuses on the temporal aspects of visual attention, cognitive load, and interpreting performance, so that we can better ascertain the optimal parameters for remote interpreting while taking into account individual differences. These data will then feed into a machine-learning-based model of interpreting performance, which will also be used to validate a software plug-in that provides a real-time indicator of visual attention and cognitive load vis-à-vis interpreting performance.
Finally, I conclude with a discussion of limitations and the contributions of the projects and an outline for future work on this topic of growing importance.
Speaker's short bio: Prof Stephen Doherty is Deputy Head of School in the School of Humanities & Languages at UNSW Sydney, where he leads a significant education and research portfolio. He is a psychologist and linguist with the role of Associate Professor in Linguistics, Interpreting, and Translation, and lead of the HAL Language Processing Research Lab. With a focus on the psychology of language and technology, his research investigates human language processing and usage by employing natural language processing techniques and combinations of online and offline methods, mainly eye tracking and psychometrics. His research has been supported by the Australian Research Council, Science Foundation Ireland, the European Commission, the Federal Bureau of Investigation, the National Accreditation Authority for Translators and Interpreters, NSW Health, Enterprise Ireland, and a range of industry collaborations, including Microsoft and SAP. As a Chief Investigator, he has a career total of $2.1 million in competitive research grants and contracted research.