Dr Elena Davitti
Academic and research departments
Centre for Translation Studies, Literature and Languages, Faculty of Arts, Business and Social Sciences
Biography
I am an Associate Professor in Translation Studies with expertise in interpreting, both conference and dialogue. I am also Programme Leader of the MA Interpreting (Multilingual Pathway) and the MA Translation and Interpreting offered by the Centre for Translation Studies (CTS), where I am based. I hold a PhD in Translation and Intercultural Studies from the University of Manchester and an MA in Conference Interpreting from the University of Bologna at Forlì. Before joining Surrey in 2013, I practised as a freelance interpreter and translator and worked as an interpreter trainer at several universities in the UK and Italy, including the University of Leeds, the University of Birmingham, the University of Macerata and UNINT, Rome. I am currently working on hybrid modalities at the crossroads of traditional disciplines such as translation, interpreting and subtitling, with a particular interest in real-time speech-to-text communication across languages.
University roles and responsibilities
- Programme Leader of MA Interpreting (Multilingual Pathway)
- Programme Leader of MA Translation and Interpreting
Research interests
My research interests revolve around technology-enabled methods, modalities and practices of multilingual spoken communication, namely:
- Real-time speech-to-text transfer across languages via:
- interlingual respeaking, a hybrid modality combining advances in speech recognition technology with human interpreting and subtitling skills to improve access to multilingual audiovisual content for a wider audience; its impact on language professionals and traditional practices; the skills, abilities and traits it requires; output quality; and its broader socio-economic impact;
- alternative (semi-)automated workflows integrating speech recognition and/or machine translation to deliver the same service, especially comparison of their fitness for purpose in different contexts and the human role in increasingly technologised workflows;
- Interpreting in all its modes (conference, dialogue) and modalities (face-to-face, video-mediated, hybrid), with particular emphasis on communicative dynamics, impact on interpreters and participation dynamics, interpreting quality, traditional professional practice, professionalisation and interpreter education/upskilling;
- Multimodal and interactional approaches to interpreter-mediated interaction, both face-to-face (particularly in educational and medical settings) and technology-mediated.
Research projects
The recent worldwide audiovisual content boom has led to an ever-increasing demand for such content to be made accessible in real time, in different languages, and for a wide audience in a variety of settings, including television, conferences and other live events (e.g. lectures, museum tours, business meetings, medical appointments). The SMART project (Shaping Multilingual Access through Respeaking Technology), funded by the UK's Economic and Social Research Council (ESRC, ES/T002530/1, 2020-2023), addresses the urgent challenge of delivering high-quality real-time speech-to-text services across languages by exploring a new practice, interlingual respeaking. SMART will investigate how this technique can be fine-tuned to best produce subtitles in a different language, and the impact this may have on society.
Real-time subtitling is an advanced technique based on the interaction between a human and speech recognition software, making live content accessible to deaf and hard of hearing audiences. Demand for these services is increasing rapidly, with a particular growth area emerging in live interlingual subtitling, where content is subtitled from one language to another.
This ESRC IAA-funded project will help to identify current barriers to the uptake and development of interlingual subtitling, which include an acute lack of trained professionals able to deliver the service, and will address these by producing a certified training offering.
The project will further the impact of existing ESRC-funded research, the SMART project, which focused on respeaking, an emerging technique for live subtitling and the leading method for producing live intralingual subtitles for live events of all kinds.
By partnering with key stakeholders including broadcasters and subtitling providers, this new project will build on an 'upskilling-for-testing' course prototype developed by the SMART team, with the aim of turning this into a fully-fledged, adaptable and customisable continuing professional development (CPD) model. The long-term goal is to upskill language professionals to reduce the current skills barrier that prevents widespread adoption of live interlingual subtitling across the globe.
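For readers unfamiliar with how respeaking-based live subtitling fits together, the sketch below outlines the loop in schematic Python. It is an illustration only, not project code: the respeak step is performed by a trained human, and recognise() stands in for whichever speaker-dependent speech recognition engine is used; all function names are hypothetical placeholders.

```python
# Schematic sketch of the interlingual respeaking loop described above.
# respeak() is a human step, not software; recognise() is a placeholder
# for a speaker-dependent ASR engine; the example strings are canned.

from typing import Iterator

def source_speech_chunks() -> Iterator[str]:
    """Yield successive stretches of the live source-language speech."""
    yield from ["Buongiorno a tutti", "e benvenuti alla conferenza."]

def respeak(chunk: str) -> str:
    """Human step: simultaneously translate, condense and dictate the chunk
    in the target language, voicing punctuation ('comma', 'full stop')."""
    return "Good morning everyone comma and welcome to the conference full stop"

def recognise(dictation: str) -> str:
    """ASR step: speaker-dependent recognition of the respeaker's dictation,
    converting voiced punctuation into symbols."""
    return "Good morning everyone, and welcome to the conference."

def emit_subtitle(text: str, max_chars: int = 37) -> None:
    """Segment recognised text into display lines and push them to air."""
    line = ""
    for word in text.split():
        if len(line) + len(word) + 1 > max_chars:
            print(line)
            line = word
        else:
            line = f"{line} {word}".strip()
    if line:
        print(line)

for chunk in source_speech_chunks():
    emit_subtitle(recognise(respeak(chunk)))
```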
Supervision
Postgraduate research supervision
I welcome enquiries from prospective PhD candidates with projects in the following areas:
- Spoken language interpreting (all modes and modalities)
- Hybrid modalities of speech-to-text transfer via speech recognition, especially interlingual respeaking
- Semi-/fully-automated workflows for interlingual speech-to-text in real time
- Interpreting technologies
- Video-mediated interpreting (all modes and configurations)
- Multimodal approaches to interpreter-mediated interaction
- Micro-analytical and empirical analysis of communicative and interactional dynamics in interpreting
- Interpreter education, upskilling and professionalisation
Current PhD students
Main supervisor
- Radić, Željko. Integrating speech recognition technology and human subtitling skills for the translation of interlingual subtitles
- Madell, Soumely. Technology-enhanced multilingual healthcare communication in the NHS maternity setting
Co-supervisor
- Rodríguez González, Eloy. The use of speech recognition in remote simultaneous interpreting
- Saeed, Muhammad Ahmed. The role of presence in remote simultaneous interpreting
- Zhang, Wei. Patient-centred approaches in medical interpreting.
Completed PhD projects
- Gabrych, Marta (2019). Quality Assessment of Interpreting in Polish-English Police-Suspect Interviews.
- Carpi, Beatrice (2018). Systematizing the Analysis of Songs in Stage Musicals for Translation: A Multimodal Model Based on Themes.
- Al-Jabri, Hanan (2017). TV Simultaneous Interpreting of Arabic Presidential Speeches into English During the Arab Spring.
Publications
In this paper, we present a semi-automated workflow for live interlingual speech-to-text communication which seeks to reduce the shortcomings of existing ASR systems: a human respeaker works with speaker-dependent speech recognition software (e.g., Dragon NaturallySpeaking) to deliver punctuated same-language output of superior quality to that obtained using out-of-the-box automatic speech recognition of the original speech. This is fed into a machine translation engine (the EU's eTranslation) to produce live-caption-ready text. We benchmark the quality of the output against the output of best-in-class (human) simultaneous interpreters working with the same source speeches from plenary sessions of the European Parliament. To evaluate the accuracy and facilitate the comparison between the two types of output, we use a tailored annotation approach based on the NTR model (Romero-Fresco and Pöchhacker, 2017). We find that the semi-automated workflow combining intralingual respeaking and machine translation is capable of generating outputs that are similar in terms of accuracy and completeness to the outputs produced in the benchmarking workflow, although the small scale of our experiment requires caution in interpreting this result.
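As a schematic illustration of the workflow evaluated in this paper (not the project's actual code), the sketch below chains punctuated respoken segments into a machine translation call. machine_translate() is a hypothetical stand-in for the MT engine used in the paper (the EU's eTranslation), whose real API is not reproduced here.

```python
# Sketch of the semi-automated workflow: a human respeaker produces
# punctuated same-language text via speaker-dependent ASR, and each
# segment is passed to MT to yield live-caption-ready output.
# machine_translate() is a placeholder, NOT the eTranslation API;
# it simply tags the text so the sketch runs end to end.

from typing import Iterable, Iterator

def machine_translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT call; a real deployment would invoke an MT engine."""
    return f"[{src}->{tgt}] {text}"

def live_caption_pipeline(respoken: Iterable[str],
                          src: str = "en", tgt: str = "it") -> Iterator[str]:
    """Feed punctuated respoken segments into MT, yielding caption text."""
    for segment in respoken:
        yield machine_translate(segment, src, tgt)

respoken_segments = [
    "Good morning, and welcome to the plenary session.",
    "We will now move to the first item on the agenda.",
]
for caption in live_caption_pipeline(respoken_segments):
    print(caption)
```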
The COVID-19 pandemic has accelerated the growth of remote interpreting, yet research on several aspects of remote medical interpreting (RMI) remains limited. Against this backdrop, this study reports key findings from a survey of professional healthcare interpreters with experience in RMI (N=47), addressing various gaps in RMI research, including interlocutor distribution and technology use, factors affecting interpreters’ perceived impact of RMI on their performance, medical settings in which RMI is used, and working conditions. Results indicate that most interpreters have experience with both telephone interpreting (TI) and video interpreting (VI) in the healthcare context, encountering various medical settings, distribution patterns and technological configurations. Quantitative findings reveal four similar normative configurations of interlocutor distribution in both TI and VI, each with slightly different normative technologies. TI is perceived to have a more negative impact on overall performance than VI, which receives more positive evaluations regarding source text comprehension, target text production, rapport between interlocutors, concentration, stress, and fatigue. Qualitative results reveal common challenges shared by TI and VI, with COVID-19 exacerbating some of them. This study contributes to establishing a systematic understanding of the complexity of RMI across multiple dimensions and provides a nuanced perspective on both TI and VI.
We report on a study evaluating the educational opportunities that highly multimodal and interactive Virtual Learning Environments (VLEs) provide for collaborative learning in the context of interpreter education. The study was prompted by previous research into the use of VLEs in interpreter education, which showed positive results but which focused on preparatory or ancillary activities and/or individual interpreting practice. The study reported here, which was part of a larger project on evaluating the use of VLEs in educating interpreters and their potential clients, explored the affordances of a videoconferencing platform and a 3D virtual world for collaborative learning in the context of dialogue interpreting. The participants were 13 student-interpreters, who conducted role-play simulations in both environments. Through a mix of methods, including non-participant observation, reflective group discussions, linguistic analysis of the recorded simulations and a user experience survey, several dimensions of using the VLEs were explored, including the linguistic/discursive dimension (interpreting), the interactional dimension (communication management between the participants), the ergonomic dimension (human-computer interaction) and the psychological dimension (user experience, sense of presence). Both VLEs were found to be capable of supporting situated and autonomous learning in the interpreting context, although differences arose regarding the reported user experience.
Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-presence, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter's console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the Covid-19 pandemic (e.g., DG SCIC 2019), research on RSI (booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of the cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Bujan and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter's source-text comprehension.
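By way of illustration only, the toy loop below mimics how a streaming ASR transcript of the source speech might be surfaced in an interpreter interface, with partial hypotheses overwritten in place until they are finalised. partial_transcripts() is an invented stand-in for a real streaming ASR client, which would push updates over a websocket or SDK.

```python
# Toy illustration of ASR integration into an RSI interface: partial
# hypotheses update in place; finalised segments accumulate in a
# transcript pane. partial_transcripts() is a hypothetical placeholder.

import time
from typing import Iterator, Tuple

def partial_transcripts() -> Iterator[Tuple[bool, str]]:
    """Yield (is_final, text) pairs as a streaming ASR engine refines
    its hypothesis of the current utterance."""
    yield False, "The committee will"
    yield False, "The committee will now vote"
    yield True, "The committee will now vote on the amendment."

transcript = []  # finalised segments shown to the interpreter
for is_final, text in partial_transcripts():
    # Overwrite the current line so the partial hypothesis updates in place.
    print("\r" + text.ljust(60), end="", flush=True)
    time.sleep(0.1)  # stand-in for real-time pacing between ASR updates
    if is_final:
        transcript.append(text)
print("\nTranscript pane:", " ".join(transcript))
```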
AI-related technologies used in the language industry, including automatic speech recognition (ASR) and machine translation (MT), are designed to improve human efficiency. However, humans are still in the loop for accuracy and quality, creating a working environment based on Human-AI Interaction (HAII). Very little is known about these newly created working environments and their effects on cognition. The present study focused on a novel practice, interlingual respeaking (IRSP), where real-time subtitles in another language are created through the interaction between a human and ASR software. To this end, we set up an experiment that included a purpose-made training course on IRSP over 5 weeks, investigating its effects on cognition and focusing on executive functioning (EF) and working memory (WM). We compared the cognitive performance of 51 language professionals before and after the course. Our variables were reading span (a complex WM measure), switching skills, and sustained attention. The IRSP training course improved complex WM and switching skills but not sustained attention. However, the participants were slower after the training, indicating increased vigilance in the sustained attention tasks. Finally, complex WM was confirmed as the primary competence in IRSP. The reasons for and implications of these findings are discussed.
Remote Simultaneous Interpreting (RSI) platforms enable interpreters to provide their services remotely and work from various locations. However, research shows that interpreters perceive interpreting via RSI platforms to be more challenging than on-site interpreting in terms of performance and working conditions [1]. While poor audio quality is a major concern for RSI [2,3], another frequently highlighted issue is the impact of the interpreter's visual environment on various aspects of RSI. However, this aspect has received little attention in research. The study reported in this article investigates how various visual aids and methods of presenting visual information can aid interpreters and improve their user experience (UX). The study used an experimental design and tested 29 professional conference interpreters on different visual interface options, as well as eliciting their work habits, perceptions and working environments. The findings reveal a notable increase in the frequency of RSI since the beginning of the COVID-19 pandemic. Despite this increase, most participants still preferred on-site work. The predominant platform for RSI among the interpreters sampled was Zoom, which has a minimalist interface that contrasts with interpreter preferences for maximalist, information-rich bespoke RSI interfaces. Overall, the study contributes to supporting the visual needs of interpreters in RSI.
Interlingual Subtitle Voicing (ISV) is a new technique that focuses on using speech recognition (SR), rather than traditional keyboard-based techniques, for the creation of non-live subtitles. SR has successfully been incorporated into intralingual live subtitling environments for the purposes of accessibility in major languages (real-time subtitles for the deaf and hard of hearing). However, it has not yet been integrated as a helpful tool for the translation of non-live subtitles to any great and meaningful extent, especially for lower-resourced languages like Croatian. This paper presents selected results from a larger PhD study entitled 'Interlingual Subtitle Voicing: A New Technique for the Creation of Interlingual Subtitles, A Case Study in Croatian'. More specifically, the paper focuses on the second supporting research question, which explores participants' feedback about the ISV technique, as a novel workflow element, and the accompanying technology. To explore this technique, purpose-made subtitling software was created, namely SpeakSubz. The constant enhancements of the tool, akin to software updates, are informed by participants' empirical results and qualitative feedback and shaped by subtitlers' needs. Some of the feedback from the main ISV study is presented in this paper.
The emergence of Simultaneous Interpreting Delivery Platforms (SIDPs) has opened up new opportunities for interpreters to provide cloud-based remote simultaneous interpreting (RSI) services. Similar to booth-based RSI, which has been shown to be more tiring than conventional simultaneous interpreting and more demanding in terms of information processing and mental modelling [11; 12], cloud-based RSI configurations are perceived as more stressful than conventional simultaneous interpreting and potentially detrimental to interpreting quality [2]. Computer-assisted interpreting (CAI) tools, including automatic speech recognition (ASR) [8], have been advocated as a means to support interpreters during cloud-based RSI assignments, but their effectiveness is under-explored. The study reported in this article experimentally investigated the impact of providing interpreters with access to an ASR-generated live transcript of the source speech while they were interpreting, examining its effect on their performance and overall user experience. As part of the experimental design, 16 professional conference interpreters performed a controlled interpreting test consisting of a warm-up speech (not included in the analysis) and four speeches, i.e., two lexically dense speeches and two fast speeches, presented in two different interpreting conditions, i.e., with and without ASR support. This article presents initial quantitative findings from the analysis of the interpreters' performance, which was conducted using the NTR Model [17]. Overall, the findings reveal a reduction in the total number of interpreting errors in the ASR condition. However, this is accompanied by a loss in stylistic quality in the ASR condition.
The recent global surge in audiovisual content has emphasized the importance of accessibility for wider audiences. The SMART project addressed this by exploring interlingual respeaking, a novel practice combining speech recognition technology with human interpreting and subtitling skills to produce real-time, high-quality speech-to-text services across languages. This method evolved from intralingual respeaking, which is widely used in broadcasting to create live subtitles for the deaf and hard-of-hearing. Interlingual respeaking, which involves translating live content into another language and subtitling it, could revolutionize subtitle production for foreign-language content, overcoming sensory and language barriers.
Interlingual respeaking is defined as a type of simultaneous interpreting, producing text with minimal delay. It involves two shifts: interlingual (from one language to another) and intermodal (from spoken to written). This practice combines the challenges of simultaneous interpreting with the requirements of subtitling. Respeakers must accurately convey messages in another language to a speech recognition system, adding punctuation and making real-time edits for clarity and readability. This method leverages speech recognition technology and human translation skills to ensure efficient and high-quality translated subtitles.
Interlingual respeaking offers immense potential for making multilingual content accessible to international and hearing-impaired audiences. It is particularly relevant for television, conferences and live events. However, research into its feasibility, accuracy and the skills required of language professionals is still in its early stages.
The SMART project aimed to address these research gaps. It focused on the cognitive and interpersonal profiles needed for successful interlingual respeaking. The project extended a pilot study, including language professionals from interpreting, subtitling, translation and intralingual respeaking, to explore how cognitive and interpersonal factors influence learning and performance in this field.
The SMART project's main goals were to study interlingual respeaking's complexity, focusing on the acquisition and implementation of relevant skills and the accuracy of the final subtitles. The research involved 23 postgraduate students with backgrounds in interpreting, subtitling and intralingual respeaking.
The research programme examined three areas: process, product and upskilling. It sought to understand the variables contributing to language professionals' performance, the challenges faced during performance, and how performance can be sustained. Regarding the product, it aimed to identify factors affecting the accuracy of interlingual respeaking and the impact of various individual and content characteristics on accuracy. For upskilling, the focus was on the challenges and strengths of the training course.
Key findings included the importance of working memory in predicting high performance and the enhancement of certain cognitive abilities through training. Interpersonal traits such as conscientiousness and integrated regulation were also examined. In terms of product accuracy, the average was 95.37%, with omissions being the strongest negative predictor of accuracy. High performers outperformed low performers across all scenarios.
The upskilling course was innovative, focusing on modular training and combining intralingual and interlingual practices. It addressed real-world challenges and was tailored to different professional backgrounds. The approach proved effective, with 82% of participants finding that the course met their expectations and 86% acknowledging its challenging nature. The study confirmed the benefits of a modular and personalized training approach, highlighting the need for flexibility and adaptability to different skill levels and backgrounds.
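The accuracy figures reported above come from NTR-based annotation (Romero-Fresco and Pöchhacker, 2017), in which accuracy = (N - T - R) / N * 100, where N is the number of words in the target text, T the weighted sum of translation errors and R the weighted sum of recognition errors. A minimal sketch follows; the minor/standard/serious weights are an assumption based on my reading of the model, not a definitive implementation.

```python
# Minimal sketch of an NTR-style accuracy calculation.
# The error weights below (minor/standard/serious) are an assumption
# and should be checked against the published model.

WEIGHTS = {"minor": 0.25, "standard": 0.5, "serious": 1.0}

def ntr_accuracy(n_words: int, translation_errors, recognition_errors) -> float:
    """Return the NTR accuracy rate (%) for one annotated text."""
    t = sum(WEIGHTS[sev] for sev in translation_errors)
    r = sum(WEIGHTS[sev] for sev in recognition_errors)
    return (n_words - t - r) / n_words * 100

# Example: a 600-word text with five annotated errors of varying severity.
print(f"{ntr_accuracy(600, ['standard', 'minor', 'serious'], ['minor', 'minor']):.1f}")
# -> 99.6
```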
This paper presents the key findings of the pilot phase of SMART (Shaping Multilingual Access through Respeaking Technology), a multidisciplinary international project focusing on interlingual respeaking (IRSP) for real-time speech-to-text. SMART addresses key questions around IRSP feasibility, quality and competences. The pilot project is based on experiments involving 25 postgraduate students who performed two IRSP tasks (English-Italian) after a crash course. The analysis triangulates subtitle accuracy rates with participants' subjective ratings and retrospective self-analysis. The best performers were those with a composite skillset, including interpreting/subtitling and interpreting/subtitling/respeaking. Participants indicated multitasking, time-lag, and monitoring of the speech recognition software output as the main difficulties; together with the great variability in performance, personal traits emerged as likely to affect performance. This pilot lays the conceptual and methodological foundations for a larger project involving professionals, to address a set of urgent questions for the industry.
This paper examines the work of project managers in two UK-based translation companies. Drawing on participant observation, interviews, and artifacts from field sites, our analysis focuses on the ways in which trust is developed and maintained in the relationships that project managers build, on the one hand, with the clients who commission them to undertake translation projects, and, on the other, with freelance translators who perform the translation work. The project manager's ability both to confer and to instill trust is highlighted as key to the successful operation of the company. Conceptualizing trust as a dynamic process, we consider what this process of trusting entails in this context: positive expectations vis-à-vis the other parties; willingness to expose oneself to vulnerabilities; construction of bases for suspending doubts and uncertainties (leaps of faith). We observe the important role of communication and discursive strategies in building and maintaining trust and draw conclusions for translator education.
In the last two decades, Dialogue Interpreting (DI) has been studied extensively through the lenses of discourse analysis and conversation analysis. As a result, DI has been recognised as an interactional communicative event, in which all the participants jointly and actively collaborate. Nevertheless, most of these studies focused merely on the verbal level of interaction, whereas its multimodal dimension has not received much attention so far, and the literature on this subject is still scarce and dispersed. By analysing and comparing two sequences, taken from a corpus of face-to-face interpreter-mediated encounters in pedagogical settings, this study aims at showing how multimodal analysis can contribute to a deeper understanding of the interactional dynamics of DI. In particular, the paper shows how participants employ multimodal resources (gaze, gesture, body position, proxemics, object manipulation) to co-construct different participation frameworks throughout the encounters, and how the “ecology of action” (i.e., the relationships between the participants and the surrounding environment) influences the development of interaction.
In recent years, conversation analysts have developed a growing interest in the Applied branch of Conversation Analysis (CA). Authors such as Antaki, Heritage and Richards and Seedhouse have explored the practical applications of CA in institutional contexts, to shed light on their dynamics and to suggest improvements in the services provided. On the other hand, over the past two decades, interactionally oriented methodologies have been successfully applied to the study of interpreter-mediated interaction. Nevertheless, the potential of CA for interpreter training has not been fully explored, especially with regard to the impact of multimodal semiotic resources (gaze, gesture, posture) on triadic communication. This paper illustrates the results of an exploratory study in Applied CA conducted within a postgraduate interpreting module at an Italian university. Four different extracts of interpreter-mediated encounters, video-recorded in real-life settings, were submitted to the students in order to test their reactions, guide them in analysis and raise their awareness of the problems and challenges posed by real-life scenarios. Through the triangulation of findings from recorded classroom discussion and questionnaires, the present paper discusses the implications of using CA in interpreter education.
Video Remote Interpreting (VRI) is a modality of interpreting where the interpreter interacts with the other parties-at-talk through an audiovisual link without sharing the same physical interactional space. In dialogue settings, existing research on VRI has mostly drawn on the analysis of verbal behaviour to explore the complex dynamics of these ‘triadic’ exchanges. However, understanding the complexity of VRI requires a more holistic analysis of its dynamics in different contexts as a situated, embodied activity where resources other than talk (such as gaze, gestures, head and body movement) play a central role in the co-construction of the communicative event. This paper draws on extracts from a corpus of VRI encounters in collaborative contexts (e.g. nurse-patient interaction, customer services) to investigate how specific interactional phenomena which have been explored in traditional settings of dialogue interpreting (e.g. turn management, dyadic sequences, spatial management) unfold in VRI. In addition, the paper will identify the coping strategies implemented by interpreters to deal with various challenges. This fine-grained, microanalytical look at the data will complement the findings provided by research on VRI in legal/adversarial contexts and provide solid grounds to evaluate the impact of different moves. Its systematic integration into training will lead to a more holistic approach to VRI education.
Research in Dialogue Interpreting (DI) has traditionally drawn on qualitative analysis of verbal behaviour to explore the complex dynamics of these ‘triadic’ exchanges. Less attention has been paid to interpreter-mediated interaction as a situated, embodied activity where resources other than talk (such as gaze, gestures, head and body movement, proxemics) play a central role in the co-construction of the communicative event. This article argues that understanding the complexity of DI requires careful investigation of the interplay between multiple interactional resources, i.e. verbal in conjunction with visual, aural, embodied and spatial meaning-making resources. This call for methodological innovation is strengthened by the emergence of video-mediated interpreting, where interacting via screens without sharing the same physical space adds a further layer of complexity to interactional dynamics. Drawing on authentic extracts from interpreter-mediated interaction, both face-to-face and video-mediated, this article problematizes how the integration of a multimodal perspective into qualitative investigation of interpreter-mediated interaction can contribute to the advancement of our understanding of key interactional dynamics in DI and, in turn, broaden the scope of multimodality to include new, uncharted territory.
Research on dialogue interpreting shows that interpreters do not simply convey speech content, but also perform crucial coordinating and mediating functions. This descriptive study, which is based on PhD research conducted at the University of Manchester, explores the activity of qualified dialogue interpreters in three video-recorded parent-teacher meetings involving immigrant mothers. English and Italian are the languages used, the meetings having taken place in the UK (one case) and Italy (two cases). The study focuses on interpreters' handling of evaluative assessment, in many cases introduced by them in the target speech as an "upgrading rendition". Transcribed extracts are examined in a micro-analytical perspective, the dynamics of each actor's (dis)engagement towards interlocutors being studied in relation to gaze patterns annotated by dedicated software. Results show that the interpreter actively promotes alignment between the parties; however, s/he often does so by emphasising positive considerations to the mother. The outcome of this approach is that the mother accepts, but is not encouraged to co-construct a negotiated solution: she is assimilated, not empowered.
This book examines how researchers of discourse analysis could best disseminate their work in real world settings. The chapters include studies on spoken and written discourse using various analysis techniques, and the authors discuss how they could best engage professional practice in their work. Techniques used include Conversation Analysis in combination with other methods, genre analysis in combination with other methods, and Critical Discourse Analysis. Contributions are loosely grouped by setting and include the following settings: workplace and business; education; private and public; and government and media. The volume aims to link the end of research and the onset of praxis by creating collaboration with the places of practice, helping analysts to move forward with ideas for dissemination, collaboration and even intervention. The book will be of interest to all researchers conducting discourse analysis in professional settings.
This volume focuses on multimodality in various communicative settings, with special attention to how non-verbal elements reinforce and add meaning to verbal expressions. The first part of the book explores issues related to the use of multimodal resources in educational interactions and English language classroom teaching, also involving learners with disabilities. The second part, on the other hand, investigates multimodality as a key component of communication that takes place in different specialized domains and genres. The book reflects a variety of methodological approaches that are grounded in both quantitative and qualitative techniques. These include multimodal discourse analysis, multimodal transcription, and multimodal annotation software capable of representing the interplay of different semiotic modes, such as speech, intonation, direction of gaze, facial expressions, gestures and spatial positioning of interlocutors.
The Routledge Encyclopedia of Interpreting Studies is the authoritative reference for anyone with an academic or professional interest in interpreting. Drawing on the expertise of an international team of specialist contributors, this single-volume reference presents the state of the art in interpreting studies in a much more fine-grained matrix of entries than has ever been seen before. For the first time all key issues and concepts in interpreting studies are brought together and covered systematically and in a structured and accessible format. With all entries alphabetically arranged, extensively cross-referenced and including suggestions for further reading, this text combines clarity with scholarly accuracy and depth, defining and discussing key terms in context to ensure maximum understanding and ease of use. Practical and unique, this Encyclopedia of Interpreting Studies presents a genuinely comprehensive overview of the fast growing and increasingly diverse field of interpreting studies.
In the last two decades, empirical research has shed light on the interactional dynamics of Dialogue Interpreting (DI). Nevertheless, it remains unclear how the results of such research can be effectively integrated in interpreter education. This paper outlines a semester long module, in which research on DI is employed for teaching purposes. During the module, students are introduced to relevant literature and exposed to different case studies of interpreter-mediated interaction, based on authentic data. The aim is to create an understanding of the interpreter ’s role and conduct in a variety of communicative situations, and help students identify the challenges that may arise in interpreter-mediated interaction. Implications for current codes of conduct are also discussed.
This chapter reports the key findings of the European AVIDICUS 3 project, which focused on the use of video-mediated interpreting in legal settings across Europe. Whilst judicial and law enforcement authorities have turned to videoconferencing to minimise delays in legal proceedings, reduce costs and improve access to justice, research into the use of video links in legal proceedings has called for caution. Sossin and Yetnikoff (2007), for example, contend that the availability of financial resources for legal proceedings cannot be disentangled from the fairness of judicial decision-making. The Harvard Law School (2009: 1193) warns that, whilst the use of video links may eliminate delays, it may also reduce an individual's "opportunity to be heard in a meaningful manner". In proceedings that involve an interpreter, procedural fairness and "the opportunity to be heard in a meaningful manner" are closely linked to the quality of the interpretation. The use of video links in interpreter-mediated proceedings therefore requires a videoconferencing solution that provides optimal support for interpreting as a crucial prerequisite for achieving the ultimate goal, i.e. fairness of justice. Against this backdrop, the main aim of AVIDICUS 3 was to identify institutional processes and practices of implementing and using video links in legal proceedings and to assess them in terms of how they accommodate and support bilingual communication mediated through an interpreter. The focus was on spoken-language interpreting. The project examined 12 European jurisdictions (Belgium, Croatia, England and Wales, Finland, France, Hungary, Italy, the Netherlands, Poland, Scotland, Spain and Sweden). An ethnographic approach was adopted to identify relevant practices, including site visits, in-depth and mostly in-situ interviews with over 100 representatives from different stakeholder groups, observations of real-life proceedings, and the analysis of a number of policy documents produced in the justice sector. The chapter summarises and systematises the findings from the jurisdictions included in this study. The assessment focuses on the use of videoconferencing in both national and cross-border proceedings, and covers different applications of videoconferencing in the legal system, including its use for links between courts and remote participants (e.g. witnesses, defendants in prison) and its use to access interpreters who work offsite (see Braun 2015; Skinner, Napier & Braun in this volume).
Additional publications
Korybski, T., E. Davitti, C. Orasan, and S. Braun (2022) A Semi-Automated Live Interlingual Communication Workflow Featuring Intralingual Respeaking: Evaluation and Benchmarking. Proceedings of the Language Resources and Evaluation Conference (LREC), June 2022, European Language Resources Association, pp. 4405-4413.