
Computer Engineering ›› 2022, Vol. 48 ›› Issue (3): 100-106. doi: 10.19678/j.issn.1000-3428.0060670

• Artificial Intelligence and Pattern Recognition •

Hand-Eye Calibration Method Based on Object Detection for Robots

ZHONG Yu1, ZHANG Jing1,2, ZHANG Hua1, XIAO Xianpeng1

  1. School of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan 621000, China;
    2. School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
  • Received: 2021-01-22  Revised: 2021-03-15  Published: 2021-03-30
  • About the authors: ZHONG Yu (1997-), male, M.S. candidate; his main research interest is robot vision. ZHANG Jing (corresponding author), lecturer, Ph.D.; ZHANG Hua, professor, Ph.D.; XIAO Xianpeng, M.S. candidate.
  • Funding:
    National "13th Five-Year Plan" Nuclear Energy Development Research Project (20161295); Scientific Research Project of the Sichuan Provincial Department of Education (18ZA0492); Sichuan Science and Technology Achievement Transformation Demonstration Project (18ZHSF0071).

Abstract: An intelligent collaborative robot relies on its vision system to perceive and locate targets in the dynamic workspace of an unknown environment, so that the robotic arm can autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides the 3D point cloud of any target in the field of view, helping the intelligent collaborative robot perceive its surroundings. To obtain the transformation between the coordinate systems of the RGB-D camera and the grasping robot, a hand-eye calibration method based on the YOLOv3 object detection network is proposed. A 3D-printed ball clamped at the end of the manipulator serves as the calibration target, and an improved YOLOv3 network locates the center of the calibration ball in real time, from which the 3D position of the manipulator's end center in the camera coordinate system is calculated. Singular Value Decomposition (SVD) is then used to obtain the least-squares solution of the transformation matrix between the robot and camera coordinate systems. Hand-eye calibration experiments on a 6-DOF UR5 robotic arm and an Intel RealSense D415 depth camera show that the method requires no complicated auxiliary equipment and that the position error of the transformed space points is within 2 mm, which satisfies the grasping requirements of general visual-servoing intelligent robots.
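
The SVD step mentioned in the abstract is the standard least-squares rigid registration of two sets of corresponding 3D points (the Kabsch solution), and locating the ball center in the camera frame reduces to pinhole back-projection of the detected pixel together with its depth value. The sketch below is a minimal Python illustration under those assumptions, not the authors' implementation; the detection step is assumed to happen elsewhere, and all function names and the synthetic example are hypothetical.

# Minimal sketch of the two geometric steps summarized in the abstract
# (illustrative only; names and the detection step are assumed, not taken from the paper).
import numpy as np

def deproject_pixel(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: detected ball-center pixel (u, v) with depth z [m]
    -> 3D point in the camera frame (fx, fy, cx, cy are the camera intrinsics)."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def solve_rigid_transform(P_cam, P_base):
    """Least-squares rigid transform (R, t) mapping camera-frame points to
    robot-base-frame points, solved with SVD (Kabsch method).

    P_cam, P_base: (N, 3) arrays of corresponding 3D points, N >= 3.
    Returns R (3x3) and t (3,) such that P_base ~= P_cam @ R.T + t.
    """
    P_cam, P_base = np.asarray(P_cam, float), np.asarray(P_base, float)
    c_cam, c_base = P_cam.mean(axis=0), P_base.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P_cam - c_cam).T @ (P_base - c_base)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_base - R @ c_cam
    return R, t

# Example with synthetic data: recover a known transform from noiseless point pairs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
    R_true *= np.sign(np.linalg.det(R_true))            # make it a proper rotation (det = +1)
    t_true = np.array([0.4, -0.2, 0.6])
    P_cam = rng.uniform(-0.5, 0.5, size=(10, 3))         # simulated ball centers (camera frame)
    P_base = P_cam @ R_true.T + t_true                   # corresponding robot-frame points
    R, t = solve_rigid_transform(P_cam, P_base)
    residual = np.linalg.norm(P_cam @ R.T + t - P_base, axis=1)
    print("max residual [m]:", residual.max())

In practice, P_cam would hold the ball centers measured by the detector at several robot poses and P_base the corresponding tool-center positions read from the UR5 controller; the residual is the kind of transformed-point position error that the abstract reports as being within 2 mm.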

Key words: hand-eye calibration, robot, RGB-D camera, object detection, Singular Value Decomposition (SVD)
