VIMES: A Wearable Memory Assistance System for Automatic Information Retrieval
Access rights
openAccess
A4 Article in a conference publication
This publication is imported from Aalto University research portal.
Date
2020-10
Language
en
Pages
10
Series
Proceedings of the 28th ACM International Conference on Multimedia, pp. 3191-3200
Abstract
Advances in artificial intelligence and wearable computing are driving radical innovation in cognitive applications. In this work, we propose VIMES, an augmented reality-based memory assistance system that helps users recall declarative memory, such as whom they meet and what they talk about. Through a collaborative design process with 20 participants, we design VIMES, a system that runs on smartglasses, takes first-person audio and video as input, and extracts personal profiles and event information for display on the embedded display or a smartphone. We perform an extensive evaluation with 50 participants to show the effectiveness of VIMES for memory recall. VIMES outperforms traditional methods such as self-recall (90% memory accuracy versus 34%) while offering the best memory experience (Vividness, Coherence, and Visual Perspective all score over 4/5). The user study results show that most participants find VIMES useful (3.75/5) and easy to use (3.46/5).
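The abstract describes a capture-extract-display pipeline: first-person audio and video come in, personal profiles and event information are extracted, and the result is rendered on the smartglasses display or a smartphone. The paper's code is not included in this record, so the following is only a minimal Python sketch of that flow under stated assumptions; every name here (PersonProfile, EventRecord, extract_profiles, extract_event, render_on_display) is a hypothetical placeholder, not the authors' implementation.

# Illustrative sketch only; not the VIMES implementation.
# All classes and functions below are hypothetical placeholders for the
# pipeline stages named in the abstract.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class PersonProfile:
    """Who the wearer met (hypothetical fields)."""
    name: Optional[str] = None


@dataclass
class EventRecord:
    """What was discussed and when (hypothetical fields)."""
    timestamp: float
    participants: List[PersonProfile] = field(default_factory=list)
    transcript: str = ""


def extract_profiles(video_frames) -> List[PersonProfile]:
    """Placeholder for face detection/recognition on first-person video."""
    return [PersonProfile(name=None) for _ in video_frames[:1]]


def extract_event(audio_chunk, profiles) -> EventRecord:
    """Placeholder for speech transcription and event summarisation."""
    return EventRecord(timestamp=0.0, participants=profiles, transcript="")


def render_on_display(event: EventRecord) -> str:
    """Format a recalled event for the smartglasses or smartphone display."""
    names = ", ".join(p.name or "unknown" for p in event.participants)
    return f"[{event.timestamp:.0f}s] with {names}: {event.transcript or '(no transcript)'}"


if __name__ == "__main__":
    profiles = extract_profiles(video_frames=[object()])
    event = extract_event(audio_chunk=b"", profiles=profiles)
    print(render_on_display(event))

The sketch is deliberately structural: it shows only how the three stages described in the abstract would hand data to one another, not how recognition or transcription would actually be performed.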
Keywords
information retrieval, wearable computing, memory assistance system
Citation
Bermejo, C., Braud, T., Yang, J., Mirjafari, S., Shi, B., Xiao, Y. & Hui, P. 2020, 'VIMES: A Wearable Memory Assistance System for Automatic Information Retrieval', in Proceedings of the 28th ACM International Conference on Multimedia, ACM, pp. 3191-3200, ACM International Conference on Multimedia, Virtual, Online, 12/10/2020. https://doi.org/10.1145/3394171.3413663