Keynote speaker

Machine translation and natural language processing in crisis response scenarios

MT and NLP have proven to be crucial technologies in crises such as earthquakes and pandemics. Crises, by their nature, happen suddenly, without warning, and often require relaying crucial information between parties: aid requests from people on the ground, or broadcast messages from authorities or aid providers to affected populations. There is usually little to no lead time, making the logistics of organising human translators or developing relevant technologies difficult at best, especially when the language spoken on the ground is not spoken by the aid providers. A good example of such a scenario occurred in Haiti in 2010. The earthquake of that year devastated the island, levelling much of the infrastructure and putting tens of thousands of people in critical need of aid. The first aid providers to arrive (the US Navy and the Red Cross) had difficulty communicating with the local population, whose predominant language (spoken by over 85%) was Haitian Kreyòl. Within days, the aid providers were inundated with upwards of 5,000 requests for aid per hour, most of them written in Kreyòl. By rapidly combining NLP technologies with a large crowd of Kreyòl speakers, aid providers were able to save thousands of lives. The lesson of this crisis is that combining human skill (e.g., for translation and geolocation) with computer technology enabled rapid messaging, translation, triaging, and response. This lesson has broader implications for similar scenarios that may be less time sensitive, but no less critical, such as pandemic or humanitarian response.

Human and machine translation: a meeting of modes and minds?

Convergence of human and machine translation appears to be happening on a number of fronts: machine learning algorithms build translation models based on translations and originals produced by humans, and machine translated outputs carry the material traces of such human compositions, while simultaneously nudging human post-editors towards the machine’s renderings. The devices and technical platforms used in much translation allow easy integration of both human and machine modes, and human, machine, and post-edited translation can all be accommodated in single quality assessment frameworks. Professional and trainee translators are now systematically incorporating machine translation into their practice, and machine translation is being deployed in areas previously thought to be the preserve of humans. But theorizing about translation has yet to catch up with this meeting of modes: reflection on how established translation theory can integrate machine translation remains relatively sparse, and some post-humanist commentary leap-frogs convergence altogether to focus on a time when the machine will supposedly supplant the human. The artificial intelligence community, for its part, often frames the relationship between human and machine translation as one of competition rather than collaboration, with the machine seen as constantly gaining ground on and sometimes overtaking the human. Meanwhile, concepts travel only slowly between human and machine translation studies, sometimes arriving depleted from the journey, and the research communities remain relatively separate. In this lecture I explore these interconnected strands, asking to what extent we can say that human and machine translation are converging. In what ways? And in what contexts? Are the practices of human and machine translation coming together to form a new whole? And if so, can human translation studies and machine translation research achieve the same feat?
In short, can the meeting of modes be accompanied by a meeting of minds?