To enable a manipulator to grasp novel objects of varying sizes, different shapes, and arbitrary poses, a single-stage grasp pose detection algorithm based on multi-level features is proposed. It treats grasp pose detection as grasp angle classification and grasp position regression, predicting both in a single pass. An RGD image is generated by replacing the blue channel of the RGB image with depth data, and the lightweight feature extractor VGG16 is used as the backbone network. To address VGG16's limited feature extraction ability, the Inception module is incorporated to design a network with stronger feature extraction capability. Grasp positions are then sampled with prior boxes on feature maps of different levels, and the combined use of shallow and deep features improves the model's adaptability to objects of varying sizes. Finally, the detection result with the highest confidence is output as the optimal grasp pose. The proposed algorithm achieves 95.71% on the image-wise dataset and 94.01% on the object-wise dataset, with a detection speed of 58.8 FPS, a clear improvement over existing methods in both accuracy and speed.
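The RGD input construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name `make_rgd` and the min-max depth normalization are assumptions; the paper only specifies that the blue channel is replaced with depth data.

```python
import numpy as np

def make_rgd(rgb, depth):
    """Build an RGD image by replacing the blue channel of an RGB image
    with depth data (hypothetical helper; normalization scheme assumed).

    rgb:   (H, W, 3) uint8 array in RGB channel order
    depth: (H, W) float array (e.g. depth in meters)
    """
    rgd = rgb.copy()
    # Min-max normalize depth to [0, 255] so it matches the 8-bit channels.
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)
    rgd[..., 2] = (d * 255).astype(np.uint8)  # channel index 2 = blue
    return rgd

# Toy example: a 2x2 black image with a linear depth ramp.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
depth = np.array([[0.5, 1.0], [1.5, 2.0]])
rgd = make_rgd(rgb, depth)
print(rgd[..., 2])  # the former blue channel now encodes depth
```

The resulting three-channel image has the same shape as the RGB input, so it can be fed to a standard RGB backbone such as VGG16 without architectural changes.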