TY - JOUR
T1 - Robust auxiliary modality is beneficial for video-based cloth-changing person re-identification
AU - Chen, Youming
AU - Tuo, Ting
AU - Guo, Lijun
AU - Zhang, Rong
AU - Wang, Yirui
AU - Gao, Shangce
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/2
Y1 - 2025/2
AB - The core of video-based cloth-changing person re-identification is the extraction of cloth-irrelevant features such as body shape, face, and gait. Most current methods rely on auxiliary modalities to help the model focus on these features. Although such modalities resist interference from clothing appearance, they are not robust to cloth-changing, which degrades recognition. Joint point information better withstands cloth-changing but carries limited discriminative information about pedestrian identity; silhouettes, in contrast, carry rich discriminative information and resist interference from clothing appearance but are vulnerable to cloth-changing. We therefore combined the two modalities to construct a more robust auxiliary modality that minimizes the impact of clothing on the model. Based on the characteristics of the fused modality, we designed distinct spatial and temporal usage strategies to strengthen the model's extraction of cloth-irrelevant features. Specifically, at the spatial level, we developed a guiding method that retains fine-grained cloth-irrelevant features while using the fused features to suppress attention to cloth-relevant features in the original image. At the temporal level, we designed a fusion method that combines action features from the silhouette and joint point sequences, yielding action features that are more robust for cloth-changing pedestrians. Experiments on two video-based cloth-changing datasets, CCPG-D and CCVID, show that the proposed model outperforms existing state-of-the-art methods, and tests on the gait dataset CASIA-B demonstrate that it achieves the best average precision.
KW - Gait recognition
KW - Multi-modalities feature fusion
KW - Video-based cloth-changing person re-identification
UR - http://www.scopus.com/inward/record.url?scp=85214444903&partnerID=8YFLogxK
DO - 10.1016/j.imavis.2024.105400
M3 - Review article
AN - SCOPUS:85214444903
SN - 0262-8856
VL - 154
JO - Image and Vision Computing
JF - Image and Vision Computing
M1 - 105400
ER -