Title: Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification
Author: Silveira Jacques Junior, Julio Cezar
Baró Solé, Xavier  
Escalera Guerrero, Sergio
Others: Universitat Autònoma de Barcelona
Universitat de Barcelona
Universitat Oberta de Catalunya (UOC)
Keywords: person re-identification
similarity learning
feature fusion
ranking aggregation
Issue Date: Apr-2018
Publisher: Image and Vision Computing
Citation: Jacques Junior, J. C. S., Baró, X., & Escalera, S. (2018). Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification. Image and Vision Computing, 79, 76-85. doi: 10.1016/j.imavis.2018.08.001
Abstract: Person re-identification has received special attention from the human analysis community in the last few years. To address the challenges in this field, many researchers have proposed different strategies, which basically exploit either cross-view invariant features or cross-view robust metrics. In this work, we propose to exploit a post-ranking approach and to combine different feature representations through ranking aggregation. Spatial information, which potentially benefits person matching, is represented using a 2D body model, from which color and texture information are extracted and combined. We also consider background/foreground information, automatically extracted via a Deep Decompositional Network, as well as Convolutional Neural Network (CNN) features. To describe the matching between images we use the polynomial feature map, taking into account both local and global information. The Discriminant Context Information Analysis based post-ranking approach is used to improve initial ranking lists. Finally, the Stuart ranking aggregation method is employed to combine complementary ranking lists obtained from different feature representations. Experimental results demonstrate that our approach improves the state of the art on the VIPeR and PRID450s datasets, achieving 67.21% and 75.64% top-1 recognition rates, respectively, while obtaining competitive results on the CUHK01 dataset.
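The aggregation step described above combines several ranked gallery lists (one per feature representation) into a single consensus ranking. As an illustrative sketch only, the snippet below uses a simple mean-of-normalized-ranks (Borda-style) rule as a stand-in for the Stuart order-statistic method used in the article; the function name and the example data are hypothetical.

```python
def aggregate_rankings(ranking_lists):
    """Combine several ranked gallery lists into one consensus ranking.

    ranking_lists: list of lists, each an ordering of the same gallery
    IDs (best match first), e.g. one list per feature representation.
    Returns the gallery IDs re-ordered by accumulated normalized rank.
    """
    n = len(ranking_lists[0])
    scores = {}
    for ranking in ranking_lists:
        assert len(ranking) == n, "all lists must rank the same gallery"
        for pos, gid in enumerate(ranking):
            # Normalize rank positions to [0, 1); lower is better.
            scores[gid] = scores.get(gid, 0.0) + pos / n
    # Sort gallery IDs by accumulated normalized rank (ascending).
    return sorted(scores, key=scores.get)

# Hypothetical example: three feature channels rank five gallery IDs.
color_rank   = ["g3", "g1", "g4", "g2", "g5"]
texture_rank = ["g1", "g3", "g2", "g4", "g5"]
cnn_rank     = ["g3", "g2", "g1", "g5", "g4"]

consensus = aggregate_rankings([color_rank, texture_rank, cnn_rank])
# "g3" tops two of the three lists, so it heads the consensus ranking.
```

The Stuart method replaces the mean with an order-statistic probability over the normalized ranks, which makes it more robust when only a subset of lists agrees; the interface above would stay the same.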
Language: English
ISSN: 0262-8856
Appears in Collections:Articles

Files in This Item:
File: exploitingfeature.pdf | Size: 1.31 MB | Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.