As an instance-level recognition problem, the key to effective vehicle re-identification (Re-ID) is to reason carefully about discriminative, viewpoint-invariant features of vehicle parts at both high- and low-level semantics. However, learning part-based features typically requires laborious human annotation of part attributes. To address this issue, we propose a region-aware multi-resolution (RAMR) Re-ID framework that extracts features from a series of local regions without extra manual annotation. Technically, the proposed method improves the discriminative ability of the local features through parallel high-to-low-resolution convolutions. We also introduce a position attention module that focuses on the prominent regions providing effective information. Given that vehicle Re-ID performance can be degraded by background clutter, we extract local features from images obtained through foreground segmentation. Results show that using the original and foreground images together enhances Re-ID performance compared with using either one alone; in other words, the original and foreground images complement each other in the vehicle Re-ID process. Finally, we aggregate the global appearance and local features to further improve system performance. Extensive experiments on two publicly available vehicle Re-ID datasets, namely, VeRi-776 and VehicleID, are conducted to validate the effectiveness of each proposed strategy. The findings indicate that the RAMR model achieves significant improvement in comparison with other state-of-the-art methods.
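The position attention module mentioned above is not specified in the abstract; a common formulation of position (spatial) attention lets every spatial location attend over all others, so prominent regions can reweight the feature map. Below is a minimal NumPy sketch of that generic idea, not the paper's exact module: the projection matrices `w_q`, `w_k`, `w_v` and the residual weight `gamma` are illustrative assumptions.

```python
import numpy as np

def position_attention(x, w_q, w_k, w_v, gamma=0.1):
    """Generic position-attention sketch (an assumption, not the paper's code).

    x        : feature map of shape (C, H, W)
    w_q, w_k : (C', C) projections producing query/key descriptors
    w_v      : (C, C) projection producing value descriptors
    gamma    : weight of the attended residual added back to x
    """
    c, h, w = x.shape
    n = h * w
    flat = x.reshape(c, n)                        # C x N, one column per location
    q = w_q @ flat                                # C' x N queries
    k = w_k @ flat                                # C' x N keys
    v = w_v @ flat                                # C  x N values
    energy = q.T @ k                              # N x N pairwise similarities
    energy -= energy.max(axis=-1, keepdims=True)  # numerically stable softmax
    attn = np.exp(energy)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    out = v @ attn.T                              # aggregate values per location
    return (gamma * out + flat).reshape(c, h, w)  # residual connection

# Usage: an 8-channel 4x4 feature map with random projections.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w_q = rng.standard_normal((2, 8))
w_k = rng.standard_normal((2, 8))
w_v = rng.standard_normal((8, 8))
y = position_attention(x, w_q, w_k, w_v)
```

Setting `gamma = 0` reduces the module to the identity, which is why such attention blocks are often initialized with a zero (learnable) residual weight: the network can gradually learn how much spatial context to mix in.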
Keywords: Image segmentation, Data modeling, Feature extraction, Performance modeling, Convolution, Cameras, RGB color model