When Worlds Collide: AI-Created, Human-Mediated Video Description Services and the User Experience
Access rights
openAccess
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
Date
2021-07
Language
en
Pages
147-167 (21 pages)
Series
HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture, Lecture Notes in Computer Science, Volume 13096
Abstract
This paper reports on a user-experience study undertaken as part of the H2020 project MeMAD (‘Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy’), in which multimedia content describers from the television and archive industries tested Flow, an online platform, designed to assist the post-editing of automatically generated data, in order to enhance the production of archival descriptions of film content. Our study captured the participant experience using screen recordings, the User Experience Questionnaire (UEQ), a benchmarked interactive media questionnaire and focus group discussions, reporting a broadly positive post-editing environment. Users designated the platform’s role in the collation of machine-generated content descriptions, transcripts, named-entities (location, persons, organisations) and translated text as helpful and likely to enhance creative outputs in the longer term. Suggestions for improving the platform included the addition of specialist vocabulary functionality, shot-type detection, film-topic labelling, and automatic music recognition. The limitations of the study are, most notably, the current level of accuracy achieved in computer vision outputs (i.e. automated video descriptions of film material) which has been hindered by the lack of reliable and accurate training data, and the need for a more narratively oriented interface which allows describers to develop their storytelling techniques and build descriptions which fit within a platform-hosted storyboarding functionality. While this work has value in its own right, it can also be regarded as paving the way for the future (semi)automation of audio descriptions to assist audiences experiencing sight impairment, cognitive accessibility difficulties or for whom ‘visionless’ multimedia consumption is their preferred option.Description
| openaire: EC/H2020/780069/EU//MeMAD
Citation
Braun, S, Starr, K, Delfani, J, Tiittula, L, Laaksonen, J, Braeckman, K, Van Rijsselbergen, D, Lagrillière, S & Saarikoski, L 2021, When Worlds Collide: AI-Created, Human-Mediated Video Description Services and the User Experience. In C Stephanidis, D Harris, W-C Li, D D Schmorrow, C M Fidopiastis, M Antona, Q Gao, J Zhou, P Zaphiris, A Ioannou, A Ioannou, R A Sottilare, J Schwarz & M Rauterberg (eds), HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture: 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings. Lecture Notes in Computer Science, vol. 13096, Springer, pp. 147-167, International Conference on Human-Computer Interaction, Virtual, Online, 24/07/2021. https://doi.org/10.1007/978-3-030-90328-2_10