Paying Attention to Descriptions Generated by Image Captioning Models

Access rights

openAccess
publishedVersion

Type

A4 Article in conference proceedings

Date

2017

Language

en

Pages

2506-2515

Series

Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017, pp. 2506-2515, IEEE International Conference on Computer Vision

Abstract

To bridge the gap between humans and machines in understanding and describing images, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene descriptions. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate the benefits of low-level cues for language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better its attention agreement with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline, does not improve significantly on the MS COCO dataset, indicating that explicit bottom-up boosting does not help when the task is well learnt and tuned on a dataset, and (4) better generalization is, however, observed for the saliency-boosted model on unseen data.
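
The abstract does not specify how the bottom-up saliency signal enters the captioning model, so the fragment below is only an illustrative sketch: it assumes a CNN-to-decoder pipeline and re-weights the encoder's convolutional feature map with a saliency map before pooling. The module name, tensor shapes, and the (1 + saliency) weighting are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of saliency-boosted visual encoding for captioning.
# Assumes a pretrained CNN feature map and a bottom-up saliency map as inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyBoostedEncoder(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=512):
        super().__init__()
        # Projects the pooled visual vector to the caption decoder's input size.
        self.project = nn.Linear(feat_dim, embed_dim)

    def forward(self, cnn_feats, saliency_map):
        # cnn_feats:     (B, C, H, W) convolutional features from a pretrained CNN
        # saliency_map:  (B, 1, H0, W0) bottom-up saliency, arbitrary positive scale
        B, C, H, W = cnn_feats.shape
        sal = F.interpolate(saliency_map, size=(H, W),
                            mode='bilinear', align_corners=False)
        sal = sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)  # rescale to [0, 1] per image
        boosted = cnn_feats * (1.0 + sal)        # emphasize features at salient locations
        pooled = boosted.mean(dim=(2, 3))        # (B, C) global visual vector
        return self.project(pooled)              # (B, embed_dim) fed to the caption decoder

if __name__ == "__main__":
    enc = SaliencyBoostedEncoder()
    feats = torch.randn(2, 2048, 7, 7)           # e.g. final conv features of a ResNet
    sal = torch.rand(2, 1, 224, 224)             # e.g. output of a bottom-up saliency model
    print(enc(feats, sal).shape)                 # torch.Size([2, 512])
```

The same weighting idea could instead be applied inside an attention mechanism over spatial locations; the global-pooling variant shown here is simply the shortest self-contained illustration.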

Keywords

Visualization, Measurement, Data models, Grammar, Computational modeling, Computer science

Citation

Rezazadegan Tavakoli, H, Shetty, R, Borji, A & Laaksonen, J 2017, 'Paying Attention to Descriptions Generated by Image Captioning Models', in Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017, 8237534, IEEE International Conference on Computer Vision, IEEE, pp. 2506-2515, IEEE International Conference on Computer Vision, Venice, Italy, 22/10/2017. https://doi.org/10.1109/ICCV.2017.272