Dr Jaleh Delfani


Postdoctoral Research Fellow in Translation and Multimodal Technologies
BSc (Environmental Engineering), MA (Translation Studies), PhD (Translation and Interpreting Studies)

Publications

Sabine Braun, Kim Starr, Jaleh Delfani, Liisa Tiittula, Jorma Laaksonen, Karel Braeckman, Dieter Van Rijsselbergen, Sasha Lagrillière, Lauri Saarikoski (2021) When Worlds Collide: AI-Created, Human-Mediated Video Description Services and the User Experience, In: HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture. HCII 2021, pp. 147-167. Springer

This paper reports on a user-experience study undertaken as part of the H2020 project MeMAD (‘Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy’), in which multimedia content describers from the television and archive industries tested Flow, an online platform designed to assist the post-editing of automatically generated data, in order to enhance the production of archival descriptions of film content. Our study captured the participant experience using screen recordings, the User Experience Questionnaire (UEQ, a benchmarked interactive media questionnaire) and focus group discussions, reporting a broadly positive post-editing environment. Users rated the platform’s role in the collation of machine-generated content descriptions, transcripts, named entities (locations, persons, organisations) and translated text as helpful and likely to enhance creative outputs in the longer term. Suggestions for improving the platform included the addition of specialist vocabulary functionality, shot-type detection, film-topic labelling, and automatic music recognition. The limitations of the study are, most notably, the current level of accuracy achieved in computer vision outputs (i.e. automated video descriptions of film material), which has been hindered by the lack of reliable and accurate training data, and the need for a more narratively oriented interface which allows describers to develop their storytelling techniques and build descriptions which fit within a platform-hosted storyboarding functionality. While this work has value in its own right, it can also be regarded as paving the way for the future (semi-)automation of audio descriptions to assist audiences experiencing sight impairment or cognitive accessibility difficulties, or for whom ‘visionless’ multimedia consumption is their preferred option.

Kim Starr, Sabine Braun, Jaleh Delfani (2020) Taking a Cue From the Human: Linguistic and Visual Prompts for the Automatic Sequencing of Multimodal Narrative, In: Journal of Audiovisual Translation

Human beings find narrative sequencing in written texts and moving imagery a relatively simple task. Key to the success of this activity is establishing coherence by using critical cues to identify key characters, objects, actions and locations as they contribute to plot development.