University of Washington
William Lewis is an Affiliate Assistant Professor at the University of Washington. Until recently he was the Principal PM Architect with the Microsoft Translator team, where he led the team's efforts to build Machine Translation engines for a variety of the world's languages, including threatened and endangered languages. More recently he worked with the Translator team on Speech Translation and Transcription, developing the features that allow students to use Speech Translation in the classroom, for both multilingual and deaf and hard of hearing audiences. One of Will's current research projects, formulated in the years since his work during the Haitian crisis of 2010 (one of the foci of this talk) and undertaken with faculty from several other universities, is to develop an infrastructure to support the use of language technologies in crises, called Language Technology for Crisis Response (LT4CR). Before joining Microsoft, Will was an Assistant Professor and a founding faculty member of the Computational Linguistics Master's Program at the University of Washington. Before that, he was on the faculty at California State University, Fresno, where he helped found the university's Computational Linguistics and Cognitive Science Programs. He received a Bachelor's degree in Linguistics from the University of California, Davis, and a Master's and Doctorate in Linguistics, with an emphasis in Computational Linguistics, from the University of Arizona in Tucson.
Machine translation and natural language processing in crisis response scenarios
MT and NLP have proven to be crucial technologies in crises such as earthquakes and pandemics. Crises, by their nature, happen suddenly, without warning, and often require relaying crucial information between parties: aid requests from people on the ground, or broadcast messages from authorities or aid providers to affected populations. There is usually little to no lead time, making the logistics of organising human translators or developing relevant technologies difficult at best, especially when the language spoken on the ground is not spoken by the aid providers. A good example of such a scenario occurred in Haiti in 2010. The Haitian earthquake of that year devastated the island, levelling much of the infrastructure and putting tens of thousands of people in critical need of aid. The first aid providers to arrive (the US Navy and the Red Cross) had difficulty communicating with the local population, since that population's predominant language (spoken by over 85%) was Haitian Kreyòl. Within days, the aid providers were being inundated with upwards of 5,000 requests for aid per hour, most of them written in Kreyòl. By rapidly combining NLP technologies with a large crowd of Kreyòl speakers, aid providers were able to save thousands of lives. The lesson learned from this crisis is that combining human skill (e.g., in translation and geolocation) with computer technology allows for rapid messaging, translation, triaging, and response. The lesson has broader implications for similar scenarios that may be less time sensitive, but no less critical, such as pandemic or humanitarian response.
Dublin City University
Dorothy Kenny is Full Professor of Translation Studies at Dublin City University. She holds a BA in French and German from DCU, and an MSc in machine translation and a PhD in language engineering, both from the University of Manchester. Her current research interests include corpus-based analyses of translation and translator style, literary applications of machine translation, and approaches to the teaching of translation technology. From September 2019 to August 2022 she was principal investigator on MultiTraiNMT, a European Union-funded strategic partnership that aimed to create and disseminate innovative materials for teaching and learning about machine translation. Her recent publications include the edited volumes Machine translation for everyone: empowering users in the age of artificial intelligence (Language Science Press, 2022), Fair MT: Towards ethical, sustainable Machine Translation (a special issue of Translation Spaces 9(1), co-edited with Joss Moorkens and Félix do Carmo in 2020), and Human Issues in Translation Technology (Routledge, 2017). Professor Kenny is co-editor of the journal Translation Spaces and an Honorary Fellow of the Chartered Institute of Linguists.
Human and machine translation: a meeting of modes and minds?
Convergence of human and machine translation appears to be happening on a number of fronts: machine learning algorithms build translation models based on translations and originals produced by humans, and machine translated outputs carry the material traces of such human compositions, while simultaneously nudging human post-editors towards the machine's renderings. The devices and technical platforms used in much translation allow easy integration of both human and machine modes, and human, machine, and post-edited translation can all be accommodated in single quality assessment frameworks. Professional and trainee translators are now systematically incorporating machine translation into their practice, and machine translation is being deployed in areas previously thought to be the preserve of humans. But theorizing about translation has yet to catch up with this meeting of modes: reflection on how established translation theory can integrate machine translation remains relatively sparse, and some post-humanist commentary leap-frogs convergence altogether to focus on a time when the machine will supposedly supplant the human. The artificial intelligence community, for its part, often frames the relationship between human and machine translation as one of competition rather than collaboration, with the machine seen as constantly gaining ground on and sometimes overtaking the human. Meanwhile, concepts travel only slowly between human and machine translation studies, sometimes arriving depleted from the journey, and the research communities remain relatively separate. In this lecture I explore these interconnected strands, asking: to what extent can we say that human and machine translation are converging? In what ways? And in what contexts? Are human and machine translation practices coming together to form a new whole? And if so, can human translation studies and machine translation research achieve the same feat?
In short, can the meeting of modes be accompanied by a meeting of minds?