
Computer Engineering


A Physical World Mapping Discrepancy Engine Based on Multi-Source Sensor Data


• Published: 2025-10-27


Abstract: Quantifying the discrepancies between different sensor perception algorithms' mappings of the physical world and identifying boundary data are key challenges in automating the extraction of high-value boundary data. This paper proposes a discrepancy engine based on multi-source sensor data for the autonomous discovery of boundary data. The engine consists of two main modules: a discrepancy cognition module and a discrepancy rate calculation module. In the discrepancy cognition module, a discrepancy rate was defined, an association model linking the discrepancy rate to perception mapping discrepancies was established, and the average discrepancy rate over a dataset was used as the baseline discrepancy rate to quantify mapping discrepancies and identify boundary data. On the test set, the baseline discrepancy rates of the LiDAR, millimeter-wave radar, and vision-based perception algorithms were calculated as 0.17, 0.23, and 0.19, respectively. In the discrepancy rate calculation module, a 2D pixel distance matching strategy combining the chi-square distribution and the Welsh loss was used to match camera-detected objects with those detected by LiDAR, millimeter-wave radar, and other cameras. Compared with a fusion algorithm that used only a 3D distance matching strategy, the two algorithms achieved discrepancy rates of 0.16 and 0.14 relative to the ground truth on the test set, demonstrating that the improved matching strategy effectively enhances the matching accuracy of the fusion algorithm. The engine was validated on 75 real-world road scenarios composed of three basic scene types: straight urban roads, simple intersections, and complex intersections. The results indicate that the discrepancy engine achieves average recognition accuracies of 0.85, 0.74, and 0.82 for the boundary data of the LiDAR, millimeter-wave radar, and vision-based perception algorithms, respectively, confirming its effectiveness in identifying perception boundary data.
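The abstract does not give the exact per-frame definition of the discrepancy rate, so the following is only a minimal sketch of the baseline idea under an assumption: a per-frame rate defined as the fraction of objects on which a single sensor's perception result and a fused reference disagree, with the dataset average taken as the baseline and frames exceeding that baseline flagged as candidate boundary data. The Frame fields, the rate definition, and the margin parameter are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (assumed, not the paper's exact definition) of a baseline
# discrepancy rate and boundary-data flagging as described in the abstract.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    matched: int    # objects on which the sensor result and the fused reference agree
    unmatched: int  # objects reported by only one of the two


def discrepancy_rate(frame: Frame) -> float:
    """Assumed per-frame rate: fraction of objects the two mappings disagree on."""
    total = frame.matched + frame.unmatched
    return frame.unmatched / total if total else 0.0


def baseline_rate(frames: List[Frame]) -> float:
    """Dataset-average discrepancy rate, used as the baseline."""
    return sum(discrepancy_rate(f) for f in frames) / len(frames)


def boundary_frames(frames: List[Frame], margin: float = 0.0) -> List[int]:
    """Indices of frames whose rate exceeds the baseline (plus an assumed margin)."""
    base = baseline_rate(frames)
    return [i for i, f in enumerate(frames) if discrepancy_rate(f) > base + margin]
```

Under this reading, a lower rate means the sensor's mapping agrees more closely with the reference, and the reported values of 0.17, 0.23, and 0.19 would be the dataset-average baselines for the three perception algorithms.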

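The 2D pixel distance matching strategy is described only at a high level, so the sketch below shows one plausible realization: LiDAR or radar objects are projected into the image with an assumed pinhole camera matrix, candidate pairs are gated with a chi-square test on the 2D pixel residual (2 degrees of freedom), gated pairs are scored with a Welsch-form robust loss, and a Hungarian assignment selects the final matches. The camera matrix K, pixel covariance, gate probability, and scale c are assumed values, and the Welsch form used here is the common robust-loss definition, not necessarily the paper's exact "Welsh loss".

```python
# Hedged sketch of a 2D pixel-distance matching step: chi-square gating on the
# pixel residual plus a Welsch-form robust cost, solved with the Hungarian
# algorithm. Camera matrix K, pixel covariance, gate probability, and the
# Welsch scale are assumed values, not taken from the paper.
import numpy as np
from scipy.stats import chi2
from scipy.optimize import linear_sum_assignment


def project_to_pixels(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project 3D points given in the camera frame to pixels (pinhole model)."""
    uvw = (K @ points_cam.T).T          # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]     # divide by depth


def welsch(sq_residual, c: float = 30.0):
    """Welsch robust loss on a squared pixel residual; c is an assumed scale."""
    return (c ** 2 / 2.0) * (1.0 - np.exp(-sq_residual / c ** 2))


def match_2d(cam_uv: np.ndarray, proj_uv: np.ndarray,
             pixel_cov: np.ndarray = np.diag([8.0 ** 2, 8.0 ** 2]),
             gate_prob: float = 0.95):
    """Match camera detections (cam_uv) to projected LiDAR/radar objects (proj_uv)."""
    gate = chi2.ppf(gate_prob, df=2)    # 2-DOF gate on the (u, v) residual
    cov_inv = np.linalg.inv(pixel_cov)

    unmatched_cost = 1e6
    cost = np.full((len(cam_uv), len(proj_uv)), unmatched_cost)
    for i, c_uv in enumerate(cam_uv):
        for j, p_uv in enumerate(proj_uv):
            d = c_uv - p_uv
            if d @ cov_inv @ d <= gate:          # chi-square gating
                cost[i, j] = welsch(d @ d)       # robust cost for gated pairs

    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < unmatched_cost]
```

The 2-degree-of-freedom gate reflects the two components of the pixel residual (u, v), while the robust cost keeps large but still-gated residuals from dominating the assignment.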