VOICE: Virtual and Onsite Interpreting in Court Environments
Start date: 2019
End date: 2024
About the project
Summary
The Virtual and On-site Interpreting in Court Environments (VOICE) project investigates the impact of newly emerged remote and hybrid court configurations used in the criminal and family courts since the beginning of the COVID-19 pandemic. In line with initial evidence, which has highlighted that remote/hybrid hearings may be particularly challenging for vulnerable court participants, the VOICE study focuses on hearings involving participants from linguistic-minority backgrounds and legal interpreters.
The study will provide a synthesis of the post-COVID-19 transition to remote/hybrid hearings in the criminal and family courts, with specific emphasis on how these hearings are conducted when interpreters need to be integrated to assist proceedings with linguistic-minority participants. Special consideration is given to the courtroom configurations (i.e., the distribution of remote/onsite participants), communication media (telephone/video), and platforms that are used. Through an online survey and semi-structured interviews, the project elicits information and views from court participants to identify the impact of these new configurations, media and platforms, considering the ways in which the needs of linguistic-minority court users and interpreters have been accounted for (i.e., to ensure effective participation and procedural justice), any unintended consequences, as well as any further support that might be required.
Project Outcomes
The findings from this research will be used to develop guidelines concerning remote/hybrid hearings involving linguistic-minority court users and interpreters.
People
Principal Investigator
Professor Sabine Braun
Professor of Translation Studies; Director, Centre for Translation Studies; Co-Director, Surrey Institute for People-Centred AI
Biography
I am a Professor of Translation Studies, Director of the Centre for Translation Studies, and a Co-Director of the Surrey Institute for People-Centred Artificial Intelligence at the University of Surrey in the UK. From 2017 to 2021 I also served as Associate Dean for Research and Innovation in the Faculty of Arts and Social Sciences at the University of Surrey.
My research explores the integration and interaction of humans and machines in translation and interpreting, for example to improve access to critical information, media content and vital public services such as healthcare and justice for linguistic-minority populations and other groups in need of communication support. My overarching interest lies in the notions of fairness, trust, transparency, and quality in relation to technology use in these contexts.
For over 10 years, I have led a programme of research involving cross-disciplinary collaboration with academic and non-academic partners to improve access to justice for linguistically diverse populations. Under this programme, I have investigated the use of video links in legal proceedings involving linguistic-minority participants and interpreters from a variety of theoretical and methodological perspectives. I have led several multi-national research projects in this field (AVIDICUS 1-3, 2008-16) while contributing my expertise in video interpreting to other projects in the justice sector (e.g. QUALITAS, 2012-14; Understanding Justice, 2013-16; VEJ Evaluation, 2018-20). I have advised the European Council Working Party on e-Law (e-Justice) and other justice-sector institutions in the UK and internationally on video interpreting in legal proceedings, and have developed guidelines which have been reflected in European Council Recommendation 2015/C 250/01 on ‘Promoting the use of and sharing of best practices on cross-border videoconferencing’.
In other projects I have explored the use of videoconferencing and virtual reality to train users of interpreting services in how to communicate effectively through an interpreter (IVY, 2011-13; EVIVA, 2014-15; SHIFT, 2015-18).
A further example of my work on accessibility is my research on audio description (AD), also known as video description, for visually impaired people. In the H2020 project MeMAD (2018-21), I investigated the feasibility of (semi-)automating AD to improve access to media content that is not normally covered by human-produced AD (e.g. social media content).
In 2019, the Research Centre I lead was awarded an ‘Expanding Excellence in England (E3)’ grant (2019-24) by Research England to expand our research on human-machine integration in translation and interpreting. As part of this, I am currently leading, and involved in, a number of pilot studies aimed at improving human-machine integration in different modalities of translation and interpreting.
The insights from my research have informed my teaching in interpreting and audiovisual translation on CTS’s MA programmes and the professional training courses that I have delivered (e.g. for the Metropolitan Police Service in London).
From 2018 to 2021 I was a member of the DIN Working Group on Interpreting Services and Technologies and co-authored the first standard worldwide on remote consecutive interpreting (DIN 8578). I am a member of the BSI Sub-committee Terminology. From 2018 to 2022, I was the series editor of the IATIS Yearbook (Routledge), and I am currently associate series editor for interpreting of Elements in Translation and Interpreting (CUP) and a member of the Advisory Board of Interpreting (Benjamins). I was appointed to the sub-panel for Modern Languages and Linguistics for the Research Excellence Framework (REF 2021).