Post-Attention Modulator for Dense Video Captioning

Access rights
openAccess
Type
Conference article in proceedings
Date
2022
Language
en
Pages
1536-1542
Series
Proceedings of the 26th International Conference on Pattern Recognition (ICPR), International Conference on Pattern Recognition
Abstract
Dense video captioning (VC) aims at generating a paragraph-long description of the events in a video's segments. Following their success in language modeling, Transformer-based models for VC have also proved effective at modeling cross-domain video-text representations with cross-attention (Xatt). Despite Xatt's effectiveness, the queries and outputs of attention, which come from different domains, tend to be only weakly related. In this paper, we argue that this weak relatedness, or domain discrepancy, can impede a model from learning meaningful cross-domain representations. Hence, we propose a simple yet effective Post-Attention Modulator (PAM) that post-processes Xatt's outputs to narrow the discrepancy. Specifically, PAM modulates and enhances the average similarity between Xatt's queries and outputs. The modulated similarities are then used as a weighting basis to interpolate PAM's outputs. In our experiments, PAM was applied to two strong VC baselines, VTransformer and MART, with two different video features on the well-known VC benchmark datasets ActivityNet Captions and YouCookII. The results show that PAM brings consistent improvements of up to 14.5% in CIDEr-D, as well as in the other metrics considered, BLEU and METEOR.
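The abstract describes PAM only at a high level: measure the similarity between the cross-attention queries and outputs, modulate it, and use it as a weighting basis to interpolate the final outputs. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the paper's actual implementation; the choice of cosine similarity, sequence-level averaging, and a sigmoid gate are all assumptions made here for concreteness.

```python
import numpy as np

def post_attention_modulator(queries, xatt_outputs, eps=1e-8):
    """Hypothetical sketch of a post-attention modulator.

    queries:      (n, d) cross-attention queries (text-side states)
    xatt_outputs: (n, d) cross-attention outputs (video-side context)
    Returns an (n, d) interpolation of the two, weighted by how
    similar the outputs are to the queries on average.
    """
    # Cosine similarity between each query and its attention output
    # (assumed similarity measure; the paper may use a learned one).
    q_norm = queries / (np.linalg.norm(queries, axis=-1, keepdims=True) + eps)
    o_norm = xatt_outputs / (np.linalg.norm(xatt_outputs, axis=-1, keepdims=True) + eps)
    sim = np.sum(q_norm * o_norm, axis=-1)  # (n,)

    # Average similarity over the sequence, squashed to (0, 1)
    # with a sigmoid to act as an interpolation gate.
    gate = 1.0 / (1.0 + np.exp(-sim.mean()))

    # Interpolate: the weaker the query-output relatedness, the more
    # the output is pulled back toward the query domain.
    return gate * xatt_outputs + (1.0 - gate) * queries
```

In this reading, a low average similarity (strong domain discrepancy) shrinks the gate and mixes more of the query representation back into the output, which is one plausible way the "weighting basis to interpolate PAM's outputs" could work.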
Citation
Guo, Z., Wang, T-J. J. & Laaksonen, J. 2022, 'Post-Attention Modulator for Dense Video Captioning', in Proceedings of the 26th International Conference on Pattern Recognition (ICPR), International Conference on Pattern Recognition, IEEE, pp. 1536-1542, International Conference on Pattern Recognition, Montreal, Quebec, Canada, 21/08/2022. https://doi.org/10.1109/ICPR56361.2022.9956260