Dr Tomasz Korybski


About

Areas of specialism

Remote Simultaneous Interpreting, Interpreting and Technologies

University roles and responsibilities

  • Research Fellow

My qualifications

2013
PhD in Applied Linguistics
University of the West of England (UWE)

Previous roles

01 October 2016 - 30 December 2019
Adjunct Professor
Institute of Applied Linguistics, University of Warsaw

Publications

Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Korybski, Elena Davitti, Sabine Braun (2023) Comparing Interface Designs to Improve RSI platforms: Insights from an Experimental Study, In: Proceedings of the International Conference HiT-IT 2023, pp. 147-156

Remote Simultaneous Interpreting (RSI) platforms enable interpreters to provide their services remotely and to work from various locations. However, research shows that interpreters perceive interpreting via RSI platforms as more challenging than on-site interpreting in terms of performance and working conditions [1]. While poor audio quality is a major concern in RSI [2,3], another frequently highlighted issue is the impact of the interpreter's visual environment on various aspects of RSI, an aspect that has received little attention in research. The study reported in this article investigates how various visual aids and methods of presenting visual information can support interpreters and improve their user experience (UX). The study used an experimental design and tested 29 professional conference interpreters on different visual interface options, while also eliciting their work habits, perceptions and working environments. The findings reveal a notable increase in the frequency of RSI since the beginning of the COVID-19 pandemic. Despite this increase, most participants still preferred on-site work. The predominant platform for RSI among the interpreters sampled was Zoom, whose minimalist interface contrasts with interpreter preferences for maximalist, information-rich bespoke RSI interfaces. Overall, the study contributes to supporting the visual needs of interpreters in RSI.

Eloy Rodríguez González, Muhammad Ahmed Saeed, Tomasz Korybski, Elena Davitti, Sabine Braun (2023) Assessing the impact of automatic speech recognition on remote simultaneous interpreting performance using the NTR Model, In: Proceedings of the International Workshop on Interpreting Technologies SAY IT AGAIN 2023

The emergence of Simultaneous Interpreting Delivery Platforms (SIDPs) has opened up new opportunities for interpreters to provide cloud-based remote simultaneous interpreting (RSI) services. Similar to booth-based RSI, which has been shown to be more tiring than conventional simultaneous interpreting and more demanding in terms of information processing and mental modelling [11; 12], cloud-based RSI configurations are perceived as more stressful than conventional simultaneous interpreting and potentially detrimental to interpreting quality [2]. Computer-assisted interpreting (CAI) tools, including automatic speech recognition (ASR) [8], have been advocated as a means to support interpreters during cloud-based RSI assignments, but their effectiveness is under-explored. The study reported in this article experimentally investigated the impact of providing interpreters with access to an ASR-generated live transcript of the source speech while they were interpreting, examining its effect on their performance and overall user experience. As part of the experimental design, 16 professional conference interpreters performed a controlled interpreting test consisting of a warm-up speech (not included in the analysis) and four speeches, i.e., two lexically dense speeches and two fast speeches, presented in two different interpreting conditions, i.e., with and without ASR support. This article presents initial quantitative findings from the analysis of the interpreters' performance, which was conducted using the NTR Model [17]. Overall, the findings reveal a reduction in the total number of interpreting errors in the ASR condition. However, this is accompanied by a loss of stylistic quality in the ASR condition.

Tomasz Korybski, Elena Davitti, Constantin Orasan, Sabine Braun (2022) A Semi-Automated Live Interlingual Communication Workflow Featuring Intralingual Respeaking: Evaluation and Benchmarking, In: N Calzolari, F Bechet, P Blache, K Choukri, C Cieri, T Declerck, S Goggi, H Isahara, B Maegaard, H Mazo, H Odijk, S Piperidis (eds.), LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, pp. 4405-4413, European Language Resources Association (ELRA)

In this paper, we present a semi-automated workflow for live interlingual speech-to-text communication which seeks to reduce the shortcomings of existing ASR systems: a human respeaker works with speaker-dependent speech recognition software (e.g., Dragon NaturallySpeaking) to deliver punctuated same-language output of higher quality than that obtained using out-of-the-box automatic speech recognition of the original speech. This output is fed into a machine translation engine (the EU's eTranslation) to produce live-caption-ready text. We benchmark the quality of the output against the output of best-in-class (human) simultaneous interpreters working with the same source speeches from plenary sessions of the European Parliament. To evaluate the accuracy and facilitate the comparison between the two types of output, we use a tailored annotation approach based on the NTR model (Romero-Fresco and Pöchhacker, 2017). We find that the semi-automated workflow combining intralingual respeaking and machine translation is capable of generating outputs that are similar in terms of accuracy and completeness to the outputs produced in the benchmarking workflow, although the small scale of our experiment requires caution in interpreting this result.
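The workflow described in the abstract is a two-stage pipeline: respoken, punctuated same-language text is produced first, then passed to machine translation. The sketch below illustrates that chaining only; the function names and their bodies are hypothetical placeholders, not the authors' implementation or any real ASR/MT API.

```python
# Minimal sketch of the semi-automated interlingual workflow:
# human respeaker + speaker-dependent ASR -> punctuated same-language text
# -> machine translation -> live-caption-ready target-language text.
# All functions are illustrative stand-ins for the real components.

def respeak_with_asr(source_speech: str) -> str:
    """Stand-in for the respeaking stage: a human respeaker dictates the
    source speech into speaker-dependent ASR, yielding clean, punctuated
    same-language text. Here we only tidy whitespace."""
    return " ".join(source_speech.split())

def machine_translate(text: str, target_lang: str) -> str:
    """Stand-in for the MT stage (the paper uses the EU's eTranslation).
    Here we simply tag the text with the target language code."""
    return f"[{target_lang}] {text}"

def live_caption_pipeline(source_speech: str, target_lang: str) -> str:
    """Chain the two stages to produce live-caption-ready output."""
    intralingual_text = respeak_with_asr(source_speech)
    return machine_translate(intralingual_text, target_lang)

print(live_caption_pipeline("  The session is  now open.  ", "pl"))
# -> [pl] The session is now open.
```

The key design point is that the intralingual (respeaking) stage and the interlingual (MT) stage are decoupled, so either component could be swapped out independently.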

Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Grzegorz Korybski, Elena Davitti, Sabine Braun (2022) Connected yet Distant: An Experimental Study into the Visual Needs of the Interpreter in Remote Simultaneous Interpreting, In: 24th HCI International Conference (HCII 2022) Proceedings, Part III, Springer

Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-person, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter's console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the Covid-19 pandemic (e.g., DG SCIC 2019), research on RSI (booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of the cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Bujan and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter's source-text comprehension.