[1] Qi C R, Litany O, He K, et al. Deep Hough voting for 3D object detection in point clouds[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 9277-9286.
[2] SU M F, HU L K, HUANG R H. Semantic Segmentation Method for Outdoor Point Clouds Based on Contextual Attention[J]. Computer Engineering, 2023, 49(03): 248-256.
[3] GAO Q J, LI T H, XING Z W, et al. Point Cloud Semantic Segmentation Method Based on Block Feature Fusion[J]. Computer Engineering, 2022, 48(09): 37-44+54.
[4] NING X J, GONG L, ZHANG J L. Detection Method of Passable Road Areas Based on Laser Point Clouds[J]. Computer Engineering, 2022, 48(04): 22-29.
[5] Garnelo M, Czarnecki W M. Exploring the Space of
Key-Value-Query Models with Intention[J]. arXiv preprint
arXiv:2305.10203, 2023.
[6] Zhao S, Qi X. Prototypical VoteNet for Few-Shot 3D Point
Cloud Object Detection[J]. Advances in Neural Information
Processing Systems, 2022, 35: 13838-13851.
[7] Qi C R, Yi L, Su H, et al. Pointnet++: Deep hierarchical
feature learning on point sets in a metric space[J]. Advances in
neural information processing systems, 2017, 30.
[8] Qi C R, Su H, Mo K, et al. Pointnet: Deep learning on point
sets for 3d classification and segmentation[C]// Proceedings of
the IEEE conference on computer vision and pattern recognition.
2017: 652-660.
[9] Zhou Y, Tuzel O. Voxelnet: End-to-end learning for point cloud based 3d object detection[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 4490-4499.
[10] QIU Z, XI X F, CUI Z M, et al. Few-Shot Image Classification Based on Multi-Resolution Self-Distillation Network[J]. Computer Engineering, 2022, 48(12): 232-240.
[11] Wang Y X, Ramanan D, Hebert M. Meta-learning to detect
rare objects[C]//Proceedings of the IEEE/CVF International
Conference on Computer Vision. 2019: 9925-9934.
[12] Kang B, Liu Z, Wang X, et al. Few-shot object detection
via feature reweighting[C]//Proceedings of the IEEE/CVF
International Conference on Computer Vision. 2019: 8420-8429.
[13] Finn C, Abbeel P, Levine S. Model-agnostic meta-learning
for fast adaptation of deep networks[C]// International
conference on machine learning. PMLR, 2017: 1126-1135.
[14] Schuhmann C, Beaumont R, Vencu R, et al. Laion-5b: An
open large-scale dataset for training next generation image-text
models[J]. Advances in Neural Information Processing Systems,
2022, 35: 25278-25294.
[15] Radford A, Kim J W, Hallacy C, et al. Learning
transferable visual models from natural language
supervision[C]//International conference on machine learning.
PMLR, 2021: 8748-8763.
[16] Song H, Dong L, Zhang W N, et al. Clip models are
few-shot learners: Empirical studies on vqa and visual
entailment[J]. arXiv preprint arXiv:2203.07190, 2022.
[17] Snell J, Swersky K, Zemel R. Prototypical networks for
few-shot learning[J]. Advances in neural information processing
systems, 2017, 30.
[18] Cao Y, Wang J, Jin Y, et al. Few-shot object detection via
association and discrimination[J]. Advances in neural
information processing systems, 2021, 34: 16570-16581.
[19] Wang X, Huang T E, Darrell T, et al. Frustratingly simple
few-shot object detection[J]. arXiv preprint arXiv:2003.06957,
2020.
[20] Wu J, Liu S, Huang D, et al. Multi-scale positive sample
refinement for few-shot object detection[C]//Computer
Vision–ECCV 2020: 16th European Conference, Glasgow, UK,
August 23–28, 2020, Proceedings, Part XVI 16. Springer
International Publishing, 2020: 456-472.
[21] Sun B, Li B, Cai S, et al. Fsce: Few-shot object detection
via contrastive proposal encoding[C]//Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition. 2021: 7352-7362.
[22] Zhao N, Chua T S, Lee G H. Few-shot 3d point cloud
semantic segmentation[C]//Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition. 2021:
8873-8882.
[23] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you
need[J]. Advances in neural information processing systems,
2017, 30.
[24] Song S, Lichtenberg S P, Xiao J. Sun rgb-d: A rgb-d scene
understanding benchmark suite[C]//Proceedings of the IEEE
conference on computer vision and pattern recognition. 2015:
567-576.
[25] Dai A, Chang A X, Savva M, et al. Scannet:
Richly-annotated 3d reconstructions of indoor
scenes[C]//Proceedings of the IEEE conference on computer
vision and pattern recognition. 2017: 5828-5839.
[26] Liu J, Dong X, Zhao S, et al. Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving[J]. arXiv preprint arXiv:2302.03914, 2023.
[27] Xie S, Gu J, Guo D, et al. Pointcontrast: Unsupervised
pre-training for 3d point cloud understanding[C]//Computer
Vision–ECCV 2020: 16th European Conference, Glasgow, UK,
August 23–28, 2020, Proceedings, Part III 16. Springer
International Publishing, 2020: 574-591.
[28] Yamada R, Kataoka H, Chiba N, et al. Point cloud
pre-training with natural 3D structures[C]//Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition. 2022: 21283-21293.
[29] Yuan S, Li X, Huang H, et al. Meta-Det3D: Learn to Learn
Few-Shot 3D Object Detection[C]//Proceedings of the Asian
Conference on Computer Vision. 2022: 1761-1776.
[30] Tang W, Yang B, Li X, et al. Prototypical Variational
Autoencoder for 3D Few-shot Object
Detection[C]//Thirty-seventh Conference on Neural Information
Processing Systems. 2023.