
Professor Sabine Braun
Academic and research departments
Surrey Institute for People-Centred Artificial Intelligence, Centre for Translation Studies, School of Literature and Languages, Faculty of Arts and Social Sciences.
About
Biography
I am a Professor of Translation Studies, Director of the Centre for Translation Studies, and Co-Director (FASS) of the Surrey Institute for People-Centred Artificial Intelligence at the University of Surrey in the UK. From 2017 to 2021 I also served as Associate Dean for Research and Innovation in the Faculty of Arts and Social Sciences at the University of Surrey.
My research explores the integration and interaction of human and machine in translation and interpreting, for example to improve access to critical information, media content and vital public services such as healthcare and justice for linguistic-minority populations and other groups in need of communication support. My overarching interest lies in the notions of fairness, trust, transparency, and quality in relation to technology use in these contexts.
For over 10 years, I have led a programme of research that has involved cross-disciplinary collaboration with academic and non-academic partners to improve access to justice for linguistically diverse populations. Under this programme, I have investigated the use of video links in legal proceedings involving linguistic-minority participants and interpreters from a variety of theoretical and methodological perspectives. I have led several multi-national research projects in this field (AVIDICUS 1-3, 2008-16) while contributing my expertise in video interpreting to other projects in the justice sector (e.g. QUALITAS, 2012-14; Understanding Justice, 2013-16; VEJ Evaluation, 2018-20). I have advised the European Council Working Party on e-Law (e-Justice) and other justice-sector institutions in the UK and internationally on video interpreting in legal proceedings and have developed guidelines which have been reflected in European Council Recommendation 2015/C 250/01 on ‘Promoting the use of and sharing of best practices on cross-border videoconferencing’.
In other projects I have explored the use of videoconferencing and virtual reality to train users of interpreting services in how to communicate effectively through an interpreter (IVY, 2011-13; EVIVA, 2014-15; SHIFT, 2015-18).
A further example of my work on accessibility is my research on audio description (video description) for visually impaired people. In the H2020 project MeMAD (2018-21) I have recently investigated the feasibility of (semi-)automating AD to improve access to media content that is not normally covered by human AD (e.g. social media content).
In 2019, the Research Centre I lead was awarded an ‘Expanding Excellence in England (E3)’ grant (2019-24) by Research England to expand our research on human-machine integration in translation and interpreting. As part of this, I currently lead, and am involved in, a number of pilot studies aimed at better human-machine integration in different modalities of translation and interpreting.
The insights from my research have informed my teaching in interpreting and audiovisual translation on CTS’s MA programmes and the professional training courses that I have delivered (e.g. for the Metropolitan Police Service in London).
From 2018 to 2021 I was a member of the DIN Working Group on Interpreting Services and Technologies and co-authored DIN 8578, the first standard worldwide on remote consecutive interpreting. I am a member of the BSI Sub-committee on Terminology. From 2018 to 2022 I was the series editor of the IATIS Yearbook (Routledge), and I am currently associate series editor for interpreting of Elements in Translation and Interpreting (CUP) and a member of the Advisory Board of Interpreting (Benjamins). I was appointed to the sub-panel for Modern Languages and Linguistics for the Research Excellence Framework (REF 2021).
University roles and responsibilities
- Director of the Centre for Translation Studies
- Associate Dean (Research & Innovation), 2017-21
- Co-Director, Surrey Institute of People-Centred Artificial Intelligence
Research
Research interests
- Interaction and integration of human and machine in translation, interpreting, audiovisual translation, audio description, especially issues of quality, effectiveness, fairness, trust, transparency, responsible uses of technology
- Use of technologies to deliver interpreting services: video-mediated interpreting, remote simultaneous interpreting and other modalities of ‘distance interpreting’
- Integration of AI and other data-driven technologies in interpreting processes and workflows
- Integration of technologies in intersemiotic and audiovisual translation, especially in audio description/video captioning
- Virtual reality
- Technologies in interpreter education
- Education of interpreter clients
Research projects
Expanding Excellence in England (E3): Human-Machine Integration in Translation and Interpreting. Research England, 2019-24, £3.89M. PI.
Interpret X: Improving uptake, experience and implementation of interpreting services in primary care: a mixed methods study with South Asian communities in England. National Institute for Health and Care Research, 2022-24, £565k. Co-I.
EU-WEBPSI: Developing an EU WEB portal for Webcam Public Service Interpreting to improve access to basic services for third-country nationals. European Asylum, Migration and Integration Fund, 2022-25, £1.39M. Partner and local PI.
MHealth4All: Development and implementation of a digital platform for the promotion of access to mental healthcare for low language proficient third-country nationals in Europe. European Asylum, Migration and Integration Fund, 2022-24, £1.31M. Partner and local PI.
MeMAD: Methods for Managing Audiovisual Data. EC Horizon 2020, 2018-21, £3.43M. Partner and local PI.
Video-Enabled Justice. Office of the Sussex Police and Crime Commissioner, 2018-20, £258k. Co-I.
SHIFT: Shaping Interpreters for the Future. European Commission, 2015-18, £40k. Partner and local PI.
AVIDICUS 3: Assessment of Videoconference Interpreting in the Criminal Justice Services. European Commission, 2014-16, £267k. Project Lead.
Understanding Justice. European Commission, 2014-16, £8k. Partner and local PI.
EVIVA: Evaluating the Education of Interpreters and their Clients through Virtual Activities. European Commission, 2013-15, £341k. Project Lead.
QUALITAS: Ensuring Legal Interpreter Quality through Testing and Certification. European Commission, 2012-14, £19k. Project partner and local PI.
Interpreters in Court. Inns of Court College of Advocacy/Legal Education Foundation, 2012-14, £5k. Consultant.
Videoconferencing in EU Cross-Border Resettlement. London Probation/European Commission, 2012, £17k. Consultant.
AVIDICUS 2: Assessment of Videoconference Interpreting in the Criminal Justice Services. European Commission, 2011-13, £305k. Project Lead.
IVY: Interpreting in Virtual Reality. European Commission, 2011-13, £622k. Project Lead.
AVIDICUS: Assessment of Videoconference Interpreting in the Criminal Justice Services. European Commission, 2008-11, £240k. Project Lead.
Supervision
Postgraduate research supervision
I am interested in supervising PhD projects in the following areas:
- Interaction and integration of human and machine in translation and interpreting, especially issues of quality, effectiveness, fairness, trust, responsible uses of technology
- Use of technologies to deliver interpreting services: video-mediated interpreting, remote simultaneous interpreting and other modalities of ‘distance interpreting’
- Integration of AI and other data-driven technologies in interpreting processes and workflows
- Integration of technologies in intersemiotic and audiovisual translation, especially in audio description/video captioning
- Discursive and cognitive-pragmatic foundations of translation and interpreting
- Empirical investigations of interpreting, all modes
- Empirical investigations of audiovisual and intersemiotic translation, all modalities
- Technologies in interpreter education
- Education of interpreter clients
Current PhD students
- Carloni, Arianna. Audio describing dance.
- Davis, Olga. Modular audio description.
- Deleanu, Andreea. Accessible Audio Cues to aid the understanding of audiovisual narrative.
- Frittella, Francesca. A research-based blueprint for computer-assisted interpreting training.
- Rodriguez, Eloy. Remote Simultaneous Interpreting and automatic speech recognition.
- Saeed, Ahmed Muhammad. Exploring the visual interface in Remote Simultaneous Interpreting.
- Singureanu, Diana. Emotional intelligence in video-mediated court interpreting.
- Vickers, Charlie. Resolving Puzzles: Reducing cognitive dissonance in ‘puzzle films’ for visually impaired audiences through adapted audio description.
- Tang, Wangyi. Assessing the impact of using automatic speech recognition technology in interpreter-mediated legal proceedings.
- Bouchrara, Cheima. Closing statements in court proceedings.
Co-supervisor:
- Madell, Soumely. Multilingual communication in maternity.
- Radić, Željko. Integrating speech recognition technology and human subtitling skills for the translation of interlingual subtitles.
Completed PhD projects
- Zhang, Angela Wei (2023). Remote medical interpreting.
- Singureanu, Diana (2022). The role of emotional intelligence in the management of the different demands of video mediated interpreting.
- Gabrych, Marta (2019). Quality assessment of interpreting in Polish-English police-suspect interviews. A multi-method study.
- Delfani, Jaleh (2019). The translation of extralinguistic cultural references in animated feature films.
- Starr, Kim (2018). Audio description and cognitive diversity: a bespoke approach to facilitating access to the emotional content in multimodal narrative texts for autistic audiences.
- Ninrat, Rangsima (2018). The translation of allusion in crime fiction novels from English into Thai between 1960 and 2015.
- Merakchi, Khadidja (2018). The translation of metaphors in popular science from English into Arabic in the domain of astronomy and astrophysics.
- Al-Jabri, Hanan (2017). TV simultaneous interpreting of Arabic presidential speeches into English during the Arab spring.
- Wilson, Daniel (2017). An investigation into the comprehensive development of L2 pragmatic competence in the EFL classroom.
- Perdikaki, Katerina (2016). Film adaptation as translation: examining film adaptation as a recontextualised act of communication.
- Gough, Joanna (2016). The patterns of interaction between professional translators and online resources.
- Dicerto, Sara (2015). Multimodal pragmatics: building a new model for source text analysis.
- Bale, Richard (2014). Spoken corpus-based resources for undergraduate initial interpreter training and lexical knowledge acquisition: empirical case studies.
- McGonigle, Frances (2013). Audio description and semiotics: The translation of films for visually-impaired audiences.
- Unal, Melis (2013). Coherence in consecutive interpreting: a comparative study of short and long consecutive interpretations of English texts into Turkish.
- Yeung, Ho Man (Oscar) (2012). An applied genre analysis of the discursive practices in insurance contexts.
- De Leo, Davide (2011). The translation of judgments in different and similar legal systems, languages and language varieties.
Teaching
I have taught a range of modules, including Interpreting Studies, Public Service Interpreting, Interpreting Technologies and Audiovisual Translation, at postgraduate level. I supervise a range of PhD projects in the areas of Interpreting, especially Distance Interpreting and Interpreting Technologies, as well as Audiovisual Translation, especially Audio Description and Media Accessibility.
Publications
International migration has increased rapidly over the past 20 years, with an estimated 281 million people living outside their country of birth. Similarly, migration to the UK has continued to rise over this period; current annual migration is estimated to be over 700,000 per year (net migration of over 300,000). With migration comes linguistic diversity, and in healthcare, this often translates into linguistic discordance between patients and healthcare professionals. This can result in communication difficulties that lead to lower quality of care and poor outcomes. COVID-19 has heightened inequalities in relation to language: communication barriers, defined as barriers in understanding or accessing key information on healthcare and challenges in reporting on health conditions, are known to have compounded risks for migrants in the context of COVID-19. Digitalisation of healthcare has further amplified inequalities in primary care for migrant groups.
Video Remote Interpreting (VRI) is a modality of interpreting where the interpreter interacts with the other parties-at-talk through an audiovisual link without sharing the same physical interactional space. In dialogue settings, existing research on VRI has mostly drawn on the analysis of verbal behaviour to explore the complex dynamics of these ‘triadic’ exchanges. However, understanding the complexity of VRI requires a more holistic analysis of its dynamics in different contexts as a situated, embodied activity where resources other than talk (such as gaze, gestures, head and body movement) play a central role in the co-construction of the communicative event. This paper draws on extracts from a corpus of VRI encounters in collaborative contexts (e.g. nurse-patient interaction, customer services) to investigate how specific interactional phenomena which have been explored in traditional settings of dialogue interpreting (e.g. turn management, dyadic sequences, spatial management) unfold in VRI. In addition, the paper will identify the coping strategies implemented by interpreters to deal with various challenges. This fine-grained, microanalytical look at the data will complement the findings provided by research on VRI in legal/adversarial contexts and provide solid grounds to evaluate the impact of different moves. Its systematic integration into training will lead to a more holistic approach to VRI education.
Linguistically and culturally competent human interpreters play a crucial role in facilitating language-discordant interpersonal healthcare communication. Traditionally, interpreters work alongside patients and healthcare providers to provide in-person interpreting services. However, problems with access to professional interpreters, including time pressure and a lack of local availability of interpreters, have led to an exploration and implementation of alternative approaches to providing language support. They include the use of communication technologies to access professional interpreters and volunteers but also the application of various language and translation technologies. This chapter offers a critical review of four different approaches, all of which are conceptualised as different types of human-machine interaction: technology-mediated interpreting, crowdsourcing of volunteer language mediators via digital platforms, machine translation, and the use of translation apps populated with pre-translated phrases and sentences. Each approach will be considered in a separate section, beginning with a review of the relevant scholarly literature and main practical developments, followed by a discussion of critical issues and challenges arising. The focus is on dialogic communication and interaction. Technology-assisted methods of translating written texts are not included.
Audio description (AD) has established itself as a media accessibility service but its reliance on the specialised skills of audio describers poses challenges to broadening the service in response to changing legislation and exponential growth of audiovisual content across different media and platforms. At the same time, research on automating the description of images and video scenes has shown initial successes owing to advances in computer vision and machine learning. Although the machine's ability to capture and coherently describe the nuances and sequencing characteristic of audiovisual narratives is currently limited, the developments in computer vision have raised the question of whether automated or semi-automated methods of describing audiovisual content can be used to produce AD without compromising quality. This chapter analyses the state of the art and challenges of machine-generated image and video description and examines current approaches to advancing this field. It then reports on early practical initiatives and outlines future directions in this area. The focus is on complementarity and additionality, such as the use of automated methods to increase the availability of meaningful AD and the use of human knowledge about AD to advance such methods, as opposed to focussing on attempts to replace the human effort.
We report on a study evaluating the educational opportunities that highly multimodal and interactive Virtual Learning Environments (VLE) provide for collaborative learning in the context of interpreter education. The study was prompted by previous research into the use of VLEs in interpreter education, which showed positive results but which focused on preparatory or ancillary activities and/or individual interpreting practice. The study reported here, which was part of a larger project on evaluating the use of VLEs in educating interpreters and their potential clients, explored the affordances of a videoconferencing platform and a 3D virtual world for collaborative learning in the context of dialogue interpreting. The participants were 13 student-interpreters, who conducted role-play simulations in both environments. Through a mix of methods such as non-participant observation, reflective group discussions, linguistic analysis of the recorded simulations, and a user experience survey, several dimensions of using the VLEs were explored, including the linguistic/discursive dimension (interpreting), the interactional dimension (communication management between the participants), the ergonomic dimension (human-computer interaction) and the psychological dimension (user experience, sense of presence). Both VLEs were found to be capable of supporting situated and autonomous learning in the interpreting context, although differences arose regarding the reported user experience.
Human beings find the process of narrative sequencing in written texts and moving imagery a relatively simple task. Key to the success of this activity is establishing coherence by using critical cues to identify key characters, objects, actions and locations as they contribute to plot development.
This paper examines first steps in identifying and compiling human-generated corpora for the purpose of determining the quality of computer-generated video descriptions. This is part of a study whose general ambition is to broaden the reach of accessible audiovisual content through semi-automation of its description for the benefit of both end-users (content consumers) and industry professionals (content creators). Working in parallel with machine-derived video and image description datasets created for the purposes of advancing computer vision research, such as Microsoft COCO (Lin et al., 2015) and TGIF (Li et al., 2016), we examine the usefulness of audio descriptive texts as a direct comparator. Cognisant of the limitations of this approach, we also explore alternative human-generated video description datasets including bespoke content description. Our research forms part of the MeMAD (Methods for Managing Audiovisual Data) project, funded by the EU Horizon 2020 programme.
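As a concrete, if deliberately simplistic, illustration of how a human-generated description might serve as a comparator for a machine-generated one, the Python sketch below computes a BLEU-style clipped n-gram precision between two invented example sentences. It is a sketch of the general idea only, not the evaluation method used in MeMAD; the example texts, the tokenisation and the choice of bigrams are all assumptions made for illustration.

from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams (as tuples) in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n=2):
    # Fraction of candidate n-grams also found in the reference,
    # with clipped counts as in BLEU's modified precision.
    cand_ngrams = ngrams(candidate.lower().split(), n)
    ref_ngrams = ngrams(reference.lower().split(), n)
    if not cand_ngrams:
        return 0.0
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

machine = "a man is walking a dog in a park"           # invented machine caption
human = "a man strolls through the park with his dog"  # invented human AD line
print(f"bigram precision: {ngram_precision(machine, human):.2f}")

Surface overlap of this kind is, of course, only a weak proxy for descriptive quality, which is one reason alternative comparators such as bespoke content description are also explored.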
"This state-of-the-art volume covers recent developments in research on Audio Description, the professional practice dedicated to making audiovisual products, artistic artefacts and performances accessible to those with supplementary visual and cognitive needs. This book is key reading for researchers, advanced students and practitioners of audiovisual translation, media, film and performance studies, as well as those in related fields including cognition, narratology, computer vision and artificial intelligence"--
This paper reports on a user-experience study undertaken as part of the H2020 project MeMAD (‘Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy’), in which multimedia content describers from the television and archive industries tested Flow, an online platform, designed to assist the post-editing of automatically generated data, in order to enhance the production of archival descriptions of film content. Our study captured the participant experience using screen recordings, the User Experience Questionnaire (UEQ), a benchmarked interactive media questionnaire and focus group discussions, reporting a broadly positive post-editing environment. Users designated the platform’s role in the collation of machine-generated content descriptions, transcripts, named-entities (location, persons, organisations) and translated text as helpful and likely to enhance creative outputs in the longer term. Suggestions for improving the platform included the addition of specialist vocabulary functionality, shot-type detection, film-topic labelling, and automatic music recognition. The limitations of the study are, most notably, the current level of accuracy achieved in computer vision outputs (i.e. automated video descriptions of film material) which has been hindered by the lack of reliable and accurate training data, and the need for a more narratively oriented interface which allows describers to develop their storytelling techniques and build descriptions which fit within a platform-hosted storyboarding functionality. While this work has value in its own right, it can also be regarded as paving the way for the future (semi)automation of audio descriptions to assist audiences experiencing sight impairment, cognitive accessibility difficulties or for whom ‘visionless’ multimedia consumption is their preferred option.
Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-presence, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter's console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the Covid-19 pandemic (e.g., DG SCIC 2019), research on RSI (booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of the cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Bujan and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter's source-text comprehension.
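Purely by way of illustration of the last point, the following Python sketch shows one conceivable way of surfacing ASR output in an interpreter interface: numeric expressions, which are classically hard to retain in simultaneous interpreting, are flagged in each incoming transcript segment. The segment feed, the regular expression and the bracket-style highlighting are invented placeholders, not features of any existing SIDP or of the platform studied in the paper.

import re

# Matches integers and numbers with decimal points or thousands separators.
NUMBER = re.compile(r"\b\d[\d,.]*\b")

def highlight_numbers(asr_segment):
    # Wrap numeric tokens in markers that a UI could render as highlights.
    return NUMBER.sub(lambda m: f"[{m.group(0)}]", asr_segment)

# Simulated ASR segments; in reality these would arrive incrementally
# from a speech recognition service.
for segment in [
    "the budget rose from 2.4 million in 2019",
    "to roughly 3,100,000 euros last year",
]:
    print(highlight_numbers(segment))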
This paper reports on an empirical case study conducted to investigate the overall conditions and challenges of integrating corpus materials and corpus-based learning activities into English-language classes at a secondary school in Germany. Starting from the observation that in spite of the large amount of research into corpus-based language learning, hands-on work with corpora has remained an exception in secondary schools, the paper starts by outlining a set of pedagogical requirements for corpus integration and the approach which has formed the basis for designing the case study. Then the findings of the study are reported and discussed. As a result of the methodological challenges identified in the study, the author argues for a move from 'data-driven learning' to needs-driven corpora, corpus activities and corpus methodologies.
This paper explores data from video-mediated remote interpreting (RI) which was originally generated with the aim of investigating and comparing the quality of the interpreting performance in onsite and remote interpreting in legal contexts. One unexpected finding of this comparison was that additions and expansions were significantly more frequent in RI, and that their frequency increased further after a phase of familiarisation and training for the participating interpreters, calling for a qualitative exploration of the motives and functions of the additions and expansions. This exploration requires an appropriate methodology. Whilst introspective data give insights into interpreting processes and the motivations guiding the interpreter’s choices, they tend to be unsystematic and incomplete. Micro-analytical approaches such as Conversation Analysis are a promising alternative, especially when enriched with social macro-variables. In line with this, the present paper has a dual aim. The primary aim is to explore the nature of additions and expansions in RI, examining especially to what extent they are indicative of interpreting problems, to what degree they are specific to the videoconference situation, what they reveal about, and how they affect the interpreter’s participation in RI. The secondary aim is to evaluate the micro-analytical approach chosen for this exploration.
This contribution deals with possibilities for using corpora at secondary school level. Following an overview of relevant corpus resources, analysis procedures and tools, the foundations of corpus use in the language learning context are briefly outlined, and various possibilities for using corpora of spoken and written language are then illustrated.
The term ‘remote interpreting’ (RI) refers to the use of communication TECHNOLOGY for gaining access to an interpreter who is in another room, building, city or country and who is linked to the primary participants by telephone or videoconference. RI by telephone is nowadays often called TELEPHONE INTERPRETING or over-the-phone interpreting. RI by videoconference is often simply called remote interpreting when it refers to spoken-language interpreting. In SIGNED LANGUAGE INTERPRETING, the term VIDEO REMOTE INTERPRETING has become established. RI is best described as a modality or method of delivery. It has been used for SIMULTANEOUS INTERPRETING, CONSECUTIVE INTERPRETING and DIALOGUE INTERPRETING. This entry focuses on RI by videoconference in spoken-language interpreting.
Translating and interpreting for society and the institutions means meeting the new language needs that characterise everyday life. As a result of growing mobility and constantly increasing migration flows, institutions are often required to communicate with people who speak languages of lesser diffusion in Europe's multicultural and multilingual context. These needs are also felt in the legal sector. The articles included in this volume show clearly that meeting language needs in the legal sector means guaranteeing citizens' rights and strengthening democracy in our societies.
The use of corpora in the second-language learning context requires the availability of corpora which are pedagogically relevant with regard to choice of discourse, choice of media, annotation and size. I here describe a pedagogically motivated corpus design which supports a direct and efficient exploitation of the corpus by learners and teachers. One of the major guidelines is Widdowson's (2003) claim that the successful use of corpora requires a learner's (and teacher's) ability to 'authenticate' the corpus materials. In line with this, I argue for the development of small and pedagogically annotated corpora which enable us to combine two methods of analysis and exploitation to mutual benefit: a corpus-based approach (i.e. 'vertical reading' of e.g. concordances), which provides patterns of language use, and a discourse-based approach, which focuses on the analysis of the individual texts in the corpus and of linguistic means of expression in relation to their communicative (situational) and cultural embedding. To illustrate my points, I use a small multimedia corpus of spoken English which is currently being developed as a model corpus with pedagogical goals in mind.
Spoken language is often perceived as a deviation from the norm. This chapter highlights some of the characteristic features of ‘spokenness’ and the rationale behind them. Using English as the exemplar case, it then reports the findings of a study that investigated how the perception and acceptance of such features is influenced by the medium and mode in which spoken language is encountered (face-to-face, video, transcript) and how this differs between native speakers and non-native speakers. At the end, the pedagogical implications of the study will be discussed.
The increasing use of videoconferencing technology in legal proceedings has led to different configurations of video-mediated interpreting (VMI). Few studies have explored interpreter perceptions of VMI, each focusing on one country, configuration (e.g. interpreter-assisted video links between courts and remote participants) and setting (e.g. immigration). The study reported here is the first study drawing on multiple data sets, countries, settings and configurations to investigate interpreter perceptions of VMI. It compares perceptions in England with other countries, covering common configurations (e.g. court-prison video links, links to remote interpreters) and settings (e.g. police, court, immigration), and taking into account the sociopolitical context in which VMI has emerged. The aim is to gain systematic insights into the factors shaping the interpreters’ perceptions as a step towards improving VMI.
The potential of corpora for language learning and teaching has been widely acknowledged and their ready availability on the Web has facilitated access for a broad range of users, including language teachers and learners. However, the integration of corpora into general language learning and teaching practice has so far been disappointing. In this paper, I will argue that the shape of many existing corpora, designed with linguistic research goals in mind, clashes with pedagogic requirements for corpus design and use. Hence, a ‘pedagogic mediation of corpora’ is required (cf. Widdowson, 2003). I will also show that the realisation of this requirement touches on both the development of appropriate corpora and the ways in which they are exploited by learners and teachers. I will use a small English Interview Corpus (ELISA) to outline possible solutions for a pedagogic mediation. The major aspect of this is the combination of two approaches to the analysis and exploitation of a pedagogically relevant corpus: a corpus-based and a discourse-based approach.
Remote interpreting, whereby the interpreter is physically separated from those who need the interpretation, has been investigated in relation to conference and healthcare settings. By contrast, very little is known about remote interpreting in legal proceedings, where this method of interpreting is increasingly used to optimise interpreters’ availability. This paper reports the findings of an experimental study investigating the viability of videoconference-based remote interpreting in legal contexts. The study compared the quality of interpreter performance in traditional and remote interpreting, both using the consecutive mode. Two simulated police interviews of detainees, recreating authentic situations, were interpreted by eight interpreters with accreditation and professional experience in police interpreting. The languages involved were French (in most cases the interpreter’s native language) and English. Each interpreter interpreted one of the interviews in remote interpreting, and the other in a traditional face-to-face setting. Various types of problem in the interpretations were analysed, quantitatively and qualitatively. Among the key findings are a significantly higher number of interpreting problems, and a faster decline of interpreting performance over time, in remote interpreting. The paper gives details of these findings, and discusses the potential legal consequences of the problems identified.
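For illustration only, the kind of quantitative comparison described here ultimately reduces to tallying annotated problem instances per condition, as in the minimal Python sketch below; the condition labels, problem categories and counts are invented and do not reproduce the study's data.

from collections import Counter

# Each item: (condition, problem_type), as might be coded in annotated transcripts.
annotations = [
    ("remote", "omission"), ("remote", "hesitation"), ("remote", "omission"),
    ("face-to-face", "hesitation"), ("remote", "addition"),
    ("face-to-face", "omission"),
]

by_condition = {}
for condition, problem in annotations:
    by_condition.setdefault(condition, Counter())[problem] += 1

for condition, counts in sorted(by_condition.items()):
    print(condition, dict(counts))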
In line with the aim of the MuTra conference to address "the multiple (multilingual, multimedia, multimodal and polysemiotic) dimensions of modern translation scenarios" and to raise "questions as to the impact of new technologies on the form, content, structure and modes of translated products" (Gerzymisch-Arbogast 2007: 7), this paper will investigate the impact of multimedia communication technologies on interpreting. The use of these technologies has led to new forms of interpreting in which interpreting takes place from a distance, aided by technical mediation. After reviewing the major new and emerging forms, I will outline a set of research questions that need to be addressed and, by way of example, discuss the results of research on interpreter adaptation in videoconference interpreting.
As an emerging form of intermodal translation, audio description (AD) raises many new questions for Translation Studies and related disciplines. This paper will investigate the question of how the coherence of a multimodal source text such as a film can be re-created in audio description. Coherence in film characteristically emerges from links within and across different modes of expression (e.g. links between visual images, image-sound links and image-dialogue links). Audio describing a film is therefore not simply a matter of substituting visual images with verbal descriptions. It involves ‘translating’ some of these links into other appropriate types of links. Against this backdrop, this paper aims to examine the means available for the re-creation of coherence in an audio described version of a film, and the problems arising. To this end, the paper will take a fresh look at coherence, outlining a model of coherence which embraces verbal and multimodal texts and which highlights the important role of both source text author (viz. audio describer as translator) and target text recipients in creating coherence. This model will then be applied to a case study focussing on the re-creation of various types of intramodal and intermodal relations in AD.
This paper will focus on the use of spoken corpora in this context. 'Applied Corpus Linguistics' has produced a growing body of research into the use of corpora in language pedagogy, with most recent work focusing on spoken and multimedia corpora for language teaching. We will argue that interpreter training for business and community settings can benefit immensely from this research and we discuss how these approaches can be adapted to suit the needs of business and community interpreter training. Section 2 provides further background to contextualise the idea and the concept of corpus-based interpreter training. Sections 3 and 4 outline a discourse processing model of interpreting and a range of source-text-related challenges of interpreting as a framework for developing appropriate annotation categories. Section 5 presents initial ideas for the design of a pedagogical corpus for interpreter training. Section 6 concludes the paper by highlighting how this approach is integrated into the wider context of the IVY project and its aim to support business and community interpreter training.
Inspired by the belief that cognitive and pragmatic models of communication and discourse processing offer great potential for the study of Audiovisual Translation (AVT), this paper will review such models and discuss their contribution to conceptualising the three inter-related sub-processes underlying all forms of AVT: the comprehension of the multimodal discourse by the translator; the translation of selected elements of this discourse; and the comprehension of the newly formed multimodal discourse by the target audience. The focus will be on two models, Relevance Theory, which presents the most comprehensive pragmatic model of communication and Mental Model Theory, which underlies cognitive models of discourse processing. The two approaches will be used to discuss and question common perceptions of AVT as being ‘constrained’ and ‘partial’ translation.
The point of departure of this paper is an immersive (avatar-based) 3D virtual environment which was developed in the European project IVY – Interpreting in Virtual Reality – to simulate interpreting practice. Whilst this environment is the first 3D environment dedicated to interpreter-mediated communication, research in other educational contexts suggests that such environments can foster learning (Kim, Lee and Thomas 2012). The IVY 3D environment offers a range of virtual ‘locations’ (e.g. business meeting room, tourist office, doctor’s surgery) which serve as backdrops for the practice of consecutive and dialogue interpreting in business and public service contexts. The locations are populated with relevant objects and with robot-avatars who act as speakers by presenting recorded monologues and bilingual dialogues. Students, represented by their own avatars, join them to practise interpreting. This paper focuses on the development of the bilingual dialogues, which are at the heart of many interpreter-mediated business and public service encounters but which are notoriously difficult to obtain for educational purposes. Given that interpreter training institutions usually need to offer bilingual resources of comparable difficulty levels in many language combinations, ad-hoc approaches to the creation of such materials are normally ruled out. The approach outlined here was therefore to start from available corpora of spoken language that were designed with pedagogical applications in mind (Braun 2005, Kohn 2012). The paper begins by explaining how the dialogues were created and then discusses the benefits and potential shortcomings of this approach in the context of interpreter education. The main points of discussion concern (1) the level of systematicity and authenticity that can be achieved with this corpus-based approach; (2) the potential of a 3D virtual environment to increase this sense of authenticity and thus to enable students to experience the essence of dialogue interpreting in a simulated environment.
Audio description (AD) has established itself as a media access service for blind and partially sighted people across a range of countries, for different media and types of audiovisual performance (e.g. film, TV, theatre, opera). In countries such as the UK and Spain, legislation has been implemented for the provision of AD on TV, and the European Parliament has requested that AD for digital TV be monitored in projects such as DTV4ALL (www.psp-dtv4all.org) in order to be able to develop adequate European accessibility policies. One of the drawbacks is that in their current form, AD services largely leave the visually impaired community excluded from access to foreign-language audiovisual products when they are subtitled rather than dubbed. To overcome this problem, audio subtitling (AST) has emerged as a solution. This article will characterise audio subtitling as a modality of audiovisual localisation which is positioned at the interface between subtitling, audio description and voice-over. It will argue that audio subtitles need to be delivered in combination with audio description and will analyse, systematise and exemplify the current practice of audio description with audio subtitling using commercially available DVDs.
With the rise in population migration there has been an increased need for professional interpreters who can bridge language barriers and operate in a variety of fields such as business, legal, social and medical. Interpreters require specialized training to cope with the idiosyncrasies of each field, and their potential clients need to be aware of professional parlance. We present 'Project IVY'. In IVY, users can make a selection from over 30 interpreter training scenarios situated in the 3D virtual world. Users then interpret the oral interaction of two avatar actors. In addition to creating different 3D scenarios, we have developed an asset management system for the oral files which permits users (mentors of the trainee interpreters) to easily upload and customize the 3D environment and observe which scenario is being used by a student. In this article we present the design and development of the IVY Virtual Environment and the asset management system. Finally we discuss our plans for further development.
The aim of this paper is to introduce a methodological solution for the design and exploitation of a corpus which is dedicated to pedagogical goals. In particular, I will argue for a pedagogically appropriate corpus annotation and query, and for the enrichment of such a corpus with additional materials (including corpus-based tasks and exercises). The solution will be illustrated with the help of ELISA, a small spoken corpus of English containing video interviews with native speakers. However, the methodology is transferable to the creation of pedagogically relevant corpora with other contents and for other languages.
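A minimal sketch of this idea, with invented contents and annotation labels (it is not the ELISA implementation), might pair each text with pedagogical annotations and restrict a simple keyword-in-context query to texts matching an annotation:

corpus = [
    {"id": "interview01", "topic": "tourism", "level": "B1",
     "text": "we welcome visitors from all over the world every summer"},
    {"id": "interview02", "topic": "business", "level": "B2",
     "text": "our company works with partners from all over Europe"},
]

def kwic(corpus, keyword, topic=None, width=3):
    # Yield keyword-in-context lines, optionally filtered by topic annotation.
    for doc in corpus:
        if topic and doc["topic"] != topic:
            continue
        tokens = doc["text"].split()
        for i, token in enumerate(tokens):
            if token == keyword:
                left = " ".join(tokens[max(0, i - width):i])
                right = " ".join(tokens[i + 1:i + 1 + width])
                yield f"{doc['id']}: ...{left} [{token}] {right}..."

for line in kwic(corpus, "all"):
    print(line)

Combining the annotation filter (discourse-based selection of whole texts) with the concordance view (corpus-based vertical reading) mirrors, in miniature, the two complementary modes of exploitation argued for above.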
Computer-generated 3D virtual worlds offer a number of affordances that make them attractive and engaging sites for learning, such as providing learners with a sense of presence, opportunities for synchronous and asynchronous interaction (e.g. in the form of voice or text chat, document viewing and sharing), and possibilities for collaborative work. Some of the research into educational uses of 3D virtual environments has engaged with how the learning opportunities they offer can be evaluated and has thus been experimenting with what needs to be evaluated to explore how learning takes place in virtual worlds and what methods can be used for the evaluation. Whilst some studies evaluate the design of the virtual world, its usability and its link to learning tasks (e.g. Chang et al. 2009, Deutschmann et al. 2009, Wiecha et al. 2010), others have sought to find out more about the interaction that takes place within virtual worlds. Peterson (2010), for example, focuses on learner participation patterns and interaction strategies in a language learning context, using qualitative methods including discourse analysis of learner transcripts (of text chat output in the target language) as the main research instrument, complemented by observation, field notes, pre- and post-study questionnaires and interviews. Alternatively, Lorenzo et al. (2012) compare collaborative work on a learning object in a virtual world with the same task in a conventional learning content management system. Other studies have sought to look more specifically at the learning processes that take place in virtual environments and in so doing have started to bring together theoretical frameworks from virtual world education with the psychological or cognitive aspects involved in learning (Henderson et al. 2012; Jarmon et al. 2009). Based on such approaches, especially the mixed methods approach adopted by Jarmon et al., this chapter reports on the pedagogical evaluation of the learning processes of trainee interpreters and clients of interpreting services (i.e. professionals who (may) communicate through interpreters in their everyday working lives) using a bespoke 3D Virtual Learning Environment.
When interpreting takes place in a videoconference setting, the intrinsic technological challenges and the very remoteness of the interpreters’ location compound the complexity of the task. Existing research on remote interpreting and the problems it entails focusses on remote conference interpreting, in which the interpreters are physically separated from the conference site while the primary interlocutors are together on site as usual. In an effort to broaden the scope of research in the area of remote interpreting to include other types and to address other questions, in particular that of the interpreters’ adaptability to new working conditions, this paper analyses small-group videoconferences in which the primary interlocutors as well as the interpreters all work from different locations. The findings from an empirical case study (based on recordings of videoconference sessions as well as introspective data) are used to identify and exemplify different types of interpreter adaptation.
The topic of this paper is Audio Description (AD) for blind and partially sighted people. I will outline a discourse-based approach to AD focussing on the role of mental modelling, local and global coherence, and different types of inferences (explicatures and implicatures). Applying these concepts to AD, I will discuss initial insights and outline questions for empirical research. My main aim is to show that a discourse-based approach to AD can provide an informed framework for research, training and practice.
Because of the scarcity of training opportunities in legal interpreting, and the non-existence of training in video-mediated legal interpreting per se, both from the point of view of the legal interpreters themselves and that of the legal professionals who work with interpreters, the AVIDICUS Project included as one of its core objectives to devise and pilot three training modules on video-mediated interpreting: one for legal practitioners, including the police; one for interpreters working in the legal services; and one for interpreting students. This chapter presents the three training modules, designed and developed by the AVIDICUS Project. Following a discussion of the background context to the need for training and the technological aspects of such training, the module for student interpreters is presented, followed by the legal interpreters’ module, and finally the module aimed at legal practitioners and police officers.
As international businesses adopt social media and virtual worlds as mediums for conducting international business, so there is an increasing need for interpreters who can bridge the language barriers, and work within these new spheres. The recent rise in migration (within the EU) has also increased the need for professional interpreters in business, legal, medical and other settings. Project IVY attempts to provide bespoke 3D virtual environments that are tailor made to train interpreters to work in the new digital environments, responding to this increased demand. In this paper we present the design and development of the IVY Virtual Environment. We present past and current design strategies, our implementation progress and our future plans for further development. © 2012 IEEE.
This chapter reports the key findings of the European AVIDICUS 3 project, which focused on the use of video-mediated interpreting in legal settings across Europe. Whilst judicial and law enforcement authorities have turned to videoconferencing to minimise delays in legal proceedings, reduce costs and improve access to justice, research into the use of video links in legal proceedings has called for caution. Sossin and Yetnikoff (2007), for example, contend that the availability of financial resources for legal proceedings cannot be disentangled from the fairness of judicial decision-making. The Harvard Law School (2009: 1193) warns that, whilst the use of video links may eliminate delays, it may also reduce an individual’s “opportunity to be heard in a meaningful manner”. In proceedings that involve an interpreter, procedural fairness and “the opportunity to be heard in a meaningful manner” are closely linked to the quality of the interpretation. The use of video links in interpreter-mediated proceedings therefore requires a videoconferencing solution that provides optimal support for interpreting as a crucial prerequisite for achieving the ultimate goal, i.e. fairness of justice. Against this backdrop, the main aim of AVIDICUS 3 was to identify institutional processes and practices of implementing and using video links in legal proceedings and to assess them in terms of how they accommodate and support bilingual communication mediated through an interpreter. The focus was on spoken-language interpreting. The project examined 12 European jurisdictions (Belgium, Croatia, England and Wales, Finland, France, Hungary, Italy, the Netherlands, Poland, Scotland, Spain and Sweden). An ethnographic approach was adopted to identify relevant practices, including site visits, in-depth and mostly in-situ interviews with over 100 representatives from different stakeholder groups, observations of real-life proceedings, and the analysis of a number of policy documents produced in the justice sector. The chapter summarises and systematises the findings from the jurisdictions included in this study. The assessment focuses on the use of videoconferencing in both national and cross-border proceedings, and covers different applications of videoconferencing in the legal system, including its use for links between courts and remote participants (e.g. witnesses, defendants in prison) and its use to access interpreters who work offsite (see Braun 2015; Skinner, Napier & Braun in this volume).
This paper reports on the conceptual design and development of an avatar-based 3D virtual environment in which trainee interpreters and their potential clients (e.g. students and professionals from the fields of law, business, tourism, medicine) can explore and simulate professional interpreting practice. The focus is on business and community interpreting and hence the short consecutive and liaison interpreting modes. The environment is a product of the European collaborate project IVY (Interpreting in Virtual Reality). The paper begins with a state-of-the-art overview of the current uses of ICT in interpreter training (section 2), with a view to showing how the IVY environment has evolved out of existing knowledge of these uses, before exploring how virtual worlds are already being used for pedagogical purposes in fields related to interpreting (section 3). Section 4 then shows how existing knowledge about learning in virtual worlds has fed into the conceptual design of the IVY environment and introduces that environment, its working modes and customised digital content. This is followed by an analysis of the initial evaluation feedback on the first environment prototype (section 5), a discussion of the main pedagogical implications (section 6) and concluding remarks (section 7). The more technical aspects of the IVY environment are described in Ritsos et al. (2012).
In response to increasing mobility and migration in Europe, the European Directive 2010/64/EU on the right to interpretation and translation in criminal proceedings has highlighted the importance of quality in legal translation and interpreting. At the same time, the economic situation is putting pressure on public services and translation/interpreting service providers alike, jeopardising quality standards and fair access to justice. With regard to interpreting, the use of videoconference technology is now being widely considered as a potential solution for gaining cost-effective and timely access to qualified legal interpreters. However, this gives rise to many questions, including: how technological mediation through videoconferencing affects the quality of interpreting; how this is related to the actual videoconference setting and the distribution of participants; and ultimately whether the different forms of video-mediated interpreting are sufficiently reliable for legal communication. It is against this backdrop that the AVIDICUS Project (2008-11), co-funded by the European Commission’s Directorate-General Justice, set out to research the quality and viability of video-mediated interpreting in criminal proceedings. This volume, which is based on the final AVIDICUS Symposium in 2011, presents a cross-section of the findings from AVIDICUS and complementary research initiatives, as well as recommendations for judicial services, legal practitioners and police officers, and legal interpreters.
The field of sign language interpreting is undergoing an exponential increase in the delivery of services through remote and video technologies. The nature of these technologies challenges established notions of interpreting as a situated, communicative event and of the interpreter as a participant. As a result, new perspectives and research are necessary for interpreters to thrive in this environment. This volume fills that gap and features interdisciplinary explorations of remote interpreting from spoken and signed language interpreting scholars who examine various issues from linguistic, sociological, physiological, and environmental perspectives. Here or There presents cutting-edge empirical research that informs the professional practice of remote interpreting, whether it be video relay service, video conference, or video remote interpreting. The research is augmented by the perspectives of stakeholders and deaf consumers on the quality of the interpreted work. Among the topics covered are professional attitudes and motivations, interpreting in specific contexts, and adaptation strategies. The contributors also address the potential implications of relying on remote interpreting, discuss remote interpreter education, and offer recommendations for service providers.
This paper reports on a long-term collaboration between academic researchers and non-academic institutions in Europe to investigate the quality and viability of video-mediated interpreting in legal proceedings (AVIDICUS: Assessment of Video-Mediated Interpreting in the Criminal Justice System).
Since the pioneering work of John Sinclair on building and using corpora for researching, describing and teaching language, much thought has been given to corpora in Applied Linguistics (Hunston 2002), how to use corpora in language teaching (Sinclair 2004), teaching and learning by doing corpus analysis (Kettemann / Marko 2002) and similar themes. A look at the titles of recent papers, monographs and edited volumes (printed in italics in this introduction) suggests that Applied Corpus Linguistics (Connor / Upton 2004) has established itself as a specific and expanding field of study. It has provided ideas on how to manage the step from corpora to classroom (O’Keeffe et al. 2007) and has produced a growing body of research into the use of corpora in the foreign language classroom (Hidalgo et al. 2007). At face value, the enthusiasm of the research community seems to be increasingly shared by practising teachers. At many teacher training seminars at which I have discussed the use(fulness) of corpus resources, I have met teachers who, at the end of the seminar, were eager to use corpora with their students and were especially interested in the growing number of easily accessible web-based resources. But in spite of everyone’s best intentions, the use of corpora in language classrooms remains the exception, and the question of what it takes to get past ‘Groundhog Day’ in corpus-based language learning and teaching is far from being solved. Spoken corpora may not be the obvious solution. The use of Spoken corpora in Applied Linguistics (Campoy / Luzón 2007) is usually considered to be more challenging than the use of written corpora, since spoken language is often perceived to be ‘messy’, grammatically challenging and lexically poor. Moreover, spoken corpora have traditionally been more difficult to build and distribute. However, multimedia technologies have not only made this easier but have also opened up new ways of exploiting corpus data. Against this backdrop, this paper will argue that spoken multimedia corpora are not simply an interesting type of corpus for language learning, but that they can in fact lead the way in bringing corpus technology and language pedagogy together (Braun et al. 2006). After a brief review of some of the prevailing obstacles to a more widespread use of corpora by students and some common approaches and solutions to the problems at hand (in section 2), one approach to designing a pedagogically viable corpus will be discussed in more detail (in section 3). The approach will then be exemplified (in section 4) using the ELISA corpus, a spoken multimedia corpus of professional English, to illustrate how corpus-based work can be expanded beyond the conventional methods of ‘data-driven learning’. The paper will conclude with an outlook on some more recent initiatives in spoken corpus development (in section 5). The wider aim of this paper is to stimulate further discussion about, and research into, the development of pedagogically viable corpora, tools and methods which can foster student-centred corpus use in language learning and in other areas such as translator / interpreter training and the study of language-based communication in general.
Audio description (AD) is a growing arts and media access service for visually impaired people. As a practice rooted in intermodal mediation, i.e. ‘translating’ visual images into verbal descriptions, it is in urgent need of interdisciplinary, research-led grounding. Seeking to stimulate further research in this field, this paper aims to discuss the major dimensions of AD, give an overview of completed and ongoing research relating to each of these dimensions, and outline questions for further academic study.
The translation of written language, the translation of spoken language and interpreting have traditionally been separate fields of education and expertise, and the technologies that emulate and/or support those human activities have been developed and researched using different methodologies and by different groups of researchers. Although a recent increase in synergy between these well-established fields has begun to blur the boundaries, this section will adhere to the three-fold distinction and begin by giving an overview of key concepts in relation to written-language translation and technology, including computer-assisted translation (CAT) and fully automatic machine translation (MT). This will be followed by an overview of spoken-language translation and technology, which will make a distinction between written translation products (speech-to-text translation, STT) and spoken translation products (speech-to-speech translation, SST). The key concepts of interpreting supported by information and communications technology (ICT), which is currently separate from the technological developments in written- and spoken-language translation, will be outlined in a third section, and a fourth will provide an overview of current uses of translation and interpreting technologies.
This special volume, Here or There: Research on interpreting via video link, aims to bring together a collection of international research on remote interpreting mediated by an audio-video link, covering both spoken-language and sign-language interpreting experiences. There is still much to be learnt about how we define and describe the needs of all stakeholders and how best to use the technology to enable interpreting services to function as intended. As in other areas of study, a number of discrepancies have already emerged when it comes to interpreting by video link, and we have yet to reach clear and conclusive answers. This chapter aims to give an overview of the emerging field of remote interpreting by video link and to review the empirical research that has come from this sector.
The use of communication technologies such as telephony, videoconferencing and web-conferencing in interpreter-mediated communication has led to alternative ways of delivering interpreting services. Several uses of these technologies can be distinguished in connection with interpreting. ‘Remote interpreting’ in the narrow sense often refers to their use to gain access to an interpreter in another location, but similar methods of interpreting are required in virtual meetings in which the primary participants themselves are distributed across different sites. In spite of their different underlying motivations, these methods of interpreting all share elements of remote working from the interpreter’s point of view and will therefore be subsumed here under one heading. Although the practice of remote interpreting (in all its forms) is controversial among interpreters, the last two decades have seen an increase in this practice in all fields of interpreting. As such, it has also caught the attention of scholars, who have begun to investigate remote interpreting, for example, with a view to the quality of the interpreter’s performance and a range of psychological and physiological factors. This chapter will begin by explaining the key terms and concepts associated with remote interpreting and then give an overview of the historical development and current trends of remote interpreting in supra-national institutions, legal, healthcare and other settings, referring to current and emerging practice and to insights from research. This will be followed by the presentation of recommendations for practice and an outlook on future directions for this practice and for research.