Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification
Access rights
Open access, accepted version
A1 Original article in a scientific journal
This publication is imported from the Aalto University research portal.
View publication in the Research portal
View/Open full text file from the Research portal
Other link related to publication
Date
2025-03-01
Language
English
Pages
13
Series
IEEE Transactions on Neural Networks and Learning Systems, Volume 36, Issue 3, pp. 4220-4232
Abstract
Unsupervised cross-dataset person reidentification (Re-ID), which aims to transfer knowledge from a labeled source domain to an unlabeled target domain, has recently attracted increasing attention. Two frameworks are common: pixel alignment, which transfers low-level knowledge, and feature alignment, which transfers high-level knowledge. In this article, we propose a novel recurrent autoencoder (RAE) framework that unifies these two kinds of methods and inherits their merits. Specifically, the proposed RAE comprises three modules: a feature-transfer (FT) module, a pixel-transfer (PT) module, and a fusion module. The FT module uses an encoder to map source and target images into a shared feature space in which features are identity-discriminative and the gap between source and target features is reduced. The PT module uses a decoder to reconstruct the original images from their features; here, we expect the images reconstructed from target features to be in the source style, so that low-level knowledge is propagated to the target domain. After transferring both high- and low-level knowledge with the two modules above, we design a bilinear pooling layer to fuse the two kinds of knowledge. Extensive experiments on the Market-1501, DukeMTMC-ReID, and MSMT17 datasets show that our method significantly outperforms both pixel-alignment and feature-alignment Re-ID methods and achieves new state-of-the-art results.
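The abstract only names the three modules, so for concreteness, here is a minimal PyTorch sketch of how an encoder (FT), a decoder (PT), and a bilinear-pooling fusion layer could fit together. The toy backbone, all layer sizes, the global average pooling, and the class name RecurrentAutoencoderSketch are illustrative assumptions, not the paper's actual architecture; the training losses, which the abstract does not specify, are omitted.

```python
# Minimal sketch of the three-module RAE design described in the abstract.
# Every layer size and the backbone are assumptions for illustration only.
import torch
import torch.nn as nn

class RecurrentAutoencoderSketch(nn.Module):
    def __init__(self, feat_dim=256, fused_dim=512):
        super().__init__()
        # FT module: encoder mapping source/target images into a shared
        # feature space (a two-layer toy backbone stands in for the real one).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # PT module: decoder reconstructing images from features; target
        # features should decode into source-style images.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
        # Fusion module: bilinear pooling of the high-level feature and a
        # feature re-encoded from the reconstructed (style-transferred) image.
        self.fuse = nn.Bilinear(feat_dim, feat_dim, fused_dim)

    def forward(self, x):
        feat_map = self.encoder(x)                    # high-level knowledge
        recon = self.decoder(feat_map)                # low-level knowledge
        feat = feat_map.mean(dim=(2, 3))              # global average pooling
        recon_feat = self.encoder(recon).mean(dim=(2, 3))
        fused = self.fuse(feat, recon_feat)           # bilinear fusion
        return fused, recon

if __name__ == "__main__":
    model = RecurrentAutoencoderSketch()
    images = torch.randn(4, 3, 64, 64)                # dummy person crops
    fused, recon = model(images)
    print(fused.shape, recon.shape)                   # (4, 512), (4, 3, 64, 64)
```

In this reading, re-encoding the reconstruction before fusion is what lets the bilinear layer combine pixel-level and feature-level knowledge in a single descriptor; the paper may realize the fusion differently.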
Description
openaire: EC/H2020/101016775/EU//INTERVENE
Keywords
Adaptation models, Cameras, Data models, Feature fusion, Image reconstruction, Lighting, Measurement, Scalability, generative adversarial nets, person reidentification (Re-ID), unsupervised learning
Citation
Yang, Y., Wang, G., Tiwari, P., Pandey, H. M. & Lei, Z. 2025, 'Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification', IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 3, pp. 4220-4232. https://doi.org/10.1109/TNNLS.2021.3128269