[1]HINTERSTOISSER S, HOLZER S, CAGNIART C, et al. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2011: 858-865.
[2]RUSU R B, BLODOW N, BEETZ M. Fast point feature histograms for 3D registration[C]//Proceedings of the IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2009: 3212-3217.
[3]DROST B, ULRICH M, NAVAB N, et al. Model globally, match locally: efficient and robust 3D object recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2010: 998-1005.
[4]LOWE D G. Object recognition from local scale-invariant features[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 1999: 1150-1157.
[5]BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding, 2008, 110(3): 346-359.
[6]LEPETIT V, MORENO-NOGUER F, FUA P. EPnP: an accurate O(n) solution to the PnP problem[J]. International Journal of Computer Vision, 2009, 81(2): 155-166.
[7]SUN J M, WANG Z H, ZHANG S Y, et al. OnePose: one-shot object pose estimation without CAD models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2022: 6825-6834.
[8]LIU Y, WEN Y L, PENG S D, et al. Gen6D: generalizable model-free 6-DoF object pose estimation from RGB images[C]//Proceedings of the European Conference on Computer Vision. Cham, Switzerland: Springer Press, 2022: 298-315.
[9]XIANG Y, SCHMIDT T, NARAYANAN V, et al. PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes[C]//Proceedings of Robotics: Science and Systems (RSS), 2018.
[10]KEHL W, MANHARDT F, TOMBARI F, et al. SSD-6D: making RGB-based 3D detection and 6D pose estimation great again[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2017: 1521-1529.
[11]TRABELSI A, CHAABANE M, BLANCHARD N, et al. A pose proposal and refinement network for better 6D object pose estimation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Washington D. C., USA: IEEE Press, 2021: 2382-2391.
[12]WANG C, XU D F, ZHU Y K, et al. DenseFusion: 6D object pose estimation by iterative dense fusion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2019: 3343-3352.
[13]QI C R, SU H, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 652-660.
[14]WANG H, SRIDHAR S, HUANG J, et al. Normalized object coordinate space for category-level 6D object pose and size estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2019: 2642-2651.
[15]LIU Y, PENG S D, LIU L L, et al. Neural rays for occlusion-aware image-based rendering[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2022: 7824-7833.
[16]MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[17]LIN J, LIU L, LU D, et al. SAM-6D: segment anything model meets zero-shot 6D object pose estimation[EB/OL]. (2023-11-27)[2025-10-01]. https://arxiv.org/pdf/2311.15707.pdf.
[18]徐浙君, 陈善雄. 基于深度学习的弱纹理图像关键目标点识别定位方法[J]. 计算机测量与控制, 2022, 30(2): 186-191.
XU Z J, CHEN S X. A method for recognition and localization of key target points in weak-texture images based on deep learning[J]. Computer Measurement & Control, 2022, 30(2): 186-191.
[19]ZENG A, YU K T, SONG S, et al. Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge[C]//Proceedings of the IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2017: 1383-1386.
[20]JIN M, LI J, ZHANG L. DOPE++: 6D pose estimation algorithm for weakly textured objects based on deep neural networks[J]. PLoS ONE, 2022, 17(6): e0269175.
[21]李耀, 程良伦, 王涛. 基于碎片模型的弱纹理物体位姿估计[J]. 计算机科学与应用, 2022, 12(1): 252-261.
LI Y, CHENG L L, WANG T. Pose estimation of weak-texture objects based on fragment model[J]. Computer Science and Applications, 2022, 12(1): 252-261.
[22]万琴, 宁顺兴, 钟杭, 等. 面向弱纹理工件的6D位姿估计与机械臂抓取方法[J]. 控制理论与应用, 2025, 42(7): 1443-1452.
WAN Q, NING S X, ZHONG H, et al. 6D pose estimation and robotic arm grasping method for weak-texture workpieces[J]. Control Theory & Applications, 2025, 42(7): 1443-1452.
[23]RAD M, LEPETIT V. BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2017: 3828-3836.
[24]TEKIN B, SINHA S N, FUA P. Real-time seamless single shot 6D object pose prediction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2018: 292-301.
[25]PENG S D, ZHOU X W, LIU Y, et al. PVNet: pixel-wise voting network for 6DoF object pose estimation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 3212-3223.
[26]PARK K, PATTEN T, VINCZE M. Pix2Pose: pixel-wise coordinate regression of objects for 6D pose estimation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2019: 7667-7676.
[27]ZAKHAROV S, SHUGUROV I, ILIC S. DPOD: 6D pose object detector and refiner[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2019: 1941-1950.
[28]HODAN T, BARATH D, MATAS J. EPOS: estimating 6D pose of objects with symmetries[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2020: 11703-11712.
[29]WANG G, MANHARDT F, TOMBARI F, et al. GDR-Net: geometry-guided direct regression network for monocular 6D object pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2021: 16611-16621.
[30]CHEN H, WANG P, WANG F, et al. EPro-PnP: generalized end-to-end probabilistic perspective-n-points for monocular object pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2022: 2781-2790.
[31]CAO T, ZHANG W, FU Y, et al. DGECN++: a depth-guided edge convolutional network for end-to-end 6D pose estimation via attention mechanism[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(6): 4214-4228.
[32]LEPETIT V, MORENO-NOGUER F, FUA P. EPnP: an accurate O(n) solution to the PnP problem[J]. International Journal of Computer Vision, 2009, 81(2): 155-166.
[33]XIE S N, GIRSHICK R, DOLLAR P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 1492-1500.
[34]HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2016: 770-778.
[35]SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2015: 1-9.
[36]HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 2261-2269.
[37]WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Cham, Switzerland: Springer Press, 2018: 3-19.
[38]TREMBLAY J, TO T, SUNDARALINGAM B, et al. Deep object pose estimation for semantic robotic grasping of household objects[EB/OL]. (2018-09-27)[2025-10-01]. https://arxiv.org/pdf/1809.10790.pdf.
[39]WANG Q L, WU B G, ZHU P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2020: 11531-11539.
[40]IWASE S, LIU X, KHIRODKAR R, et al. RePOSE: fast 6D object pose refinement via deep texture rendering[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2021: 3303-3312.
[41]PARK J, CHO N I. DProST: dynamic projective spatial transformer network for 6D pose estimation[C]//Proceedings of the European Conference on Computer Vision. Cham, Switzerland: Springer Press, 2022: 363-379.
[42]SU Y, SALEH M, FETZER T, et al. ZebraPose: coarse to fine surface encoding for 6DoF object pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2022: 6738-6748.
[43]LIANG J Y, ZHANG H B, LEI Q, et al. Dual branch PnP based network for monocular 6D pose estimation[J]. Intelligent Automation & Soft Computing, 2023, 36(3).
[44]LIAN R, LING H. CheckerPose: progressive dense keypoint localization for object pose estimation with graph neural network[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2023: 14022-14033.
[45]CAO T, ZHANG W, FU Y, et al. DGECN++: a depth-guided edge convolutional network for end-to-end 6D pose estimation via attention mechanism[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(6): 4214-4228.
[46]LI Y, MAO Y, BALA R, et al. MRC-Net: 6-DoF pose estimation with multiscale residual correlation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2024: 10476-10486.
[47]HOANG D C, TAN P X, DUONG T H A, et al. Self-supervised object pose estimation with multi-task learning[J/OL]. IEEE Transactions on Cognitive and Developmental Systems: [2025-11-07]. https://ieeexplore.ieee.org/abstract/document/11008753.
[48]LIU M, LI S, CHHATKULI A, et al. One2Any: one reference 6D pose estimation for any object[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2025: 6457-6467.
[49]CASTRO P, KIM T K. CRT-6D: fast 6D object pose estimation with cascaded refinement transformers[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Washington D. C., USA: IEEE Press, 2023: 5746-5755.
[50]HAI Y, SONG R, LI J, et al. Shape-constraint recurrent flow for 6D object pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2023: 4831-4840.
[51]LUO S, LI J, XIE Y, et al. Monocular object pose estimation using specific object imaging techniques[C]//Proceedings of the International Conference on Electronic Information Engineering and Computer. Washington D. C., USA: IEEE Press, 2024: 1067-1071.
[52]SAROWAR M S, KIM S. VLM6D: VLM-based 6DoF pose estimation based on RGBD images[EB/OL]. [2025-11-07]. https://doi.org/10.48550/arXiv.2511.00120.
[53]JIN Y, PRASAD V, JAUHRI S, et al. 6DOPE-GS: online 6D object pose estimation using Gaussian splatting[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2025: 8032-8043.