AI-Enhanced Game Accessibility and Inclusive Support (AEGIS)
Start date
September 2024
End date
November 2025
About the project
Summary
The AI-Enhanced Game Accessibility and Inclusive Support (AEGIS) project explores how audio description (AD) can make video games more accessible for blind and partially sighted players. With nearly a third of UK gamers identifying as disabled, the project addresses a growing need for inclusive game design and innovation.
AEGIS builds on collaborations with local game studios and national partners to investigate current barriers, understand user needs, and develop an accessibility model informed by best practices in media accessibility. The project will also explore how generative AI can support the creation and scaling of AD for games.
By improving accessibility awareness and tools, AEGIS aims to benefit disabled gamers, inform industry standards, and strengthen the UK’s leadership in accessible game design.
Objectives
- Identify Current Barriers: Map the main challenges and resources involved in implementing audio description in games.
- Understand Stakeholder and User Needs: Work with developers, writers, and players through interviews, focus groups, and playtesting.
- Develop an Accessibility Model: Adapt insights from film and TV AD, and explore how generative AI can support game accessibility.
Key Impacts
- For gamers: improved representation and quality of life through more inclusive play experiences.
- For industry: increased awareness and capability in accessible design, leading to new creative and economic opportunities.
- For professionals: growth in roles such as accessibility consultants, developers, testers, AD writers, and voice actors.
- For society: contribution to the UK’s national strategy to become a technology superpower and accessibility leader through new research on generative AI in game accessibility.
Outcomes
- A report on the state of the art in game audio description, including current challenges, user needs, best practices, and potential solutions.
- An accessibility model and practical toolkit to support the design and integration of AD in games.
- An in-person Game Accessibility Event and an online webinar to share findings and foster collaboration among researchers, developers, gamers, and accessibility advocates.
People
Principal investigator
Dr Yuan Zou
Lecturer in Translation Studies
Biography
I am a Lecturer in Translation Studies at the University of Surrey's Centre for Translation Studies (CTS). My background spans audiovisual translation (AVT), interpreting, and post-editing. I hold a PhD in AVT from Queen's University Belfast (QUB) and an MTI in Translation and Interpreting from Jilin University.
Before joining Surrey, I taught Interpreting and Translation at QUB and worked freelance as a translator and interpreter. These experiences have been instrumental in shaping my research direction and pedagogical approach.
My current research focuses on the integration of language technologies in interpreting and audiovisual translation, with a keen interest in harnessing these advancements to improve digital accessibility. I am investigating innovative ways in which technology can support and improve access for individuals with disabilities, ensuring that digital content is more inclusive and accessible to all audiences.
Co-investigator
Professor Sabine Braun
Professor of Translation Studies; Director, Centre for Translation Studies; Co-Director, Surrey Institute for People-Centred AI
Biography
I am a Professor of Translation Studies, Director of the Centre for Translation Studies, and a Co-Director of the Surrey Institute for People-Centred Artificial Intelligence at the University of Surrey in the UK. From 2017 to 2021 I also served as Associate Dean for Research and Innovation in the Faculty of Arts and Social Sciences at the University of Surrey.
My research explores the integration and interaction of human and machine in translation and interpreting, for example to improve access to critical information, media content, and vital public services such as healthcare and justice for linguistic-minority populations and other groups in need of communication support. My overarching interest lies in the notions of fairness, trust, transparency, and quality in relation to technology use in these contexts.
For over 10 years, I have led a programme of research that has involved cross-disciplinary collaboration with academic and non-academic partners to improve access to justice for linguistically diverse populations. Under this programme, I have investigated the use of video links in legal proceedings involving linguistic-minority participants and interpreters from a variety of theoretical and methodological perspectives. I have led several multi-national research projects in this field (AVIDICUS 1-3, 2008-16) while contributing my expertise in video interpreting to other projects in the justice sector (e.g. QUALITAS, 2012-14; Understanding Justice, 2013-16; VEJ Evaluation, 2018-20). I have advised the European Council Working Party on e-Law (e-Justice) and other justice-sector institutions in the UK and internationally on video interpreting in legal proceedings, and have developed guidelines which have been reflected in European Council Recommendation 2015/C 250/01 on ‘Promoting the use of and sharing of best practices on cross-border videoconferencing’.
In other projects, I have explored the use of videoconferencing and virtual reality to train users of interpreting services in how to communicate effectively through an interpreter (IVY, 2011-13; EVIVA, 2014-15; SHIFT, 2015-18).
A further example of my work on accessibility is my research on audio description (video description) for visually impaired people. In the H2020 project MeMAD (2018-21), I investigated the feasibility of (semi-)automating AD to improve access to media content that is not normally covered by human AD (e.g. social media content).
In 2019, the Research Centre I lead was awarded an ‘Expanding Excellence in England’ (E3) grant (2019-24) by Research England to expand our research on human-machine integration in translation and interpreting. As part of this, I currently lead, and am involved in, a number of pilot studies aimed at improving human-machine integration across different modalities of translation and interpreting.
The insights from my research have informed my teaching in interpreting and audiovisual translation on CTS’s MA programmes and the professional training courses that I have delivered (e.g. for the Metropolitan Police Service in London).
From 2018 to 2021, I was a member of the DIN Working Group on Interpreting Services and Technologies and co-authored the world’s first standard on remote consecutive interpreting (DIN 8578). I am a member of the BSI Sub-committee on Terminology. From 2018 to 2022, I was the series editor of the IATIS Yearbook (Routledge); I am currently associate series editor for interpreting of Elements in Translation and Interpreting (CUP) and a member of the Advisory Board of Interpreting (Benjamins). I was appointed to the sub-panel for Modern Languages and Linguistics for the Research Excellence Framework (REF 2021).
Funder
Games and Innovation Nexus (GAIN) Proof-of-Concept (PoC) Funding, Research England CCF-RED GAIN award
Contact
For enquiries or potential collaboration on this topic please contact Dr Yuan Zou, the Principal Investigator of the project.
See other research projects carried out at the Centre for Translation Studies.