Hebei University of Technology
Domain-generalizable person re-identification models are trained on source datasets and tested directly on unseen target datasets, which gives them broader practical applicability. Existing domain generalization models tend to focus on handling illumination and color variation while neglecting the effective use of fine-grained detail, which leads to low recognition accuracy. To address this problem, this paper proposes a domain-generalized person re-identification model that incorporates attention mechanisms. The model first extracts multi-scale features covering different receptive fields through a bottleneck-layer design built by stacking convolutional layers, and applies a feature fusion attention module to dynamically weight and fuse these multi-scale features. A multi-level attention module then mines the semantic information of the detail features. Finally, the detail features enriched with semantic information are fed into a discriminator for person re-identification. In addition, a style normalization module is designed to reduce the impact of brightness variation across datasets on the model's generalization ability. Comparison and ablation experiments on the Market-1501 and DukeMTMC-reID datasets demonstrate the effectiveness of the proposed method.
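The weighted dynamic fusion of multi-scale features described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the small gating network that would produce the attention logits is omitted, and the names (`fuse_multiscale`, `scores`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax for normalizing attention logits."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_multiscale(features, scores):
    """Fuse pooled features from branches with different receptive fields.

    features: list of (C,) feature vectors, one per scale.
    scores:   (num_scales,) raw attention logits (here assumed to come
              from a gating network, which this sketch omits).
    """
    w = softmax(scores)                # fusion weights, sum to 1
    stacked = np.stack(features)       # (num_scales, C)
    return (w[:, None] * stacked).sum(axis=0)  # weighted sum over scales

# Toy example: three scales with 4-dimensional pooled features.
feats = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
fused = fuse_multiscale(feats, np.array([0.0, 0.0, 0.0]))
# Equal logits give equal weights, so the result is the per-channel mean.
```

With unequal logits the softmax shifts weight toward the more informative scale, which is the "dynamic" aspect: the fusion weights are recomputed per input rather than fixed.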