[1] 赵凯琳, 靳小龙, 王元卓.小样本学习研究综述[J].软件学报, 2021, 32(2):349-369. ZHAO K L, JIN X L, WANG Y Z.Survey on few-shot learning[J].Journal of Software, 2021, 32(2):349-369.(in Chinese) [2] SHU J, XU Z B, MENG D Y.Small sample learning in big data era[EB/OL].[2022-06-10].https://arxiv.org/abs/1808.04572. [3] LI F F, FERGUS R, PERONA P.One-shot learning of object categories[J].IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4):594-611. [4] 宋闯, 赵佳佳, 王康, 等.面向智能感知的小样本学习研究综述[J].航空学报, 2020, 41(S1):723-756. SONG C, ZHAO J J, WANG K, et al.A survey of few shot learning based on intelligent perception[J].Acta Aeronautica et Astronautica Sinica, 2020, 41(S1):723-756.(in Chinese) [5] 葛轶洲, 刘恒, 王言, 等.小样本困境下的深度学习图像识别综述[J].软件学报, 2022, 33(1):193-210. GE Y Z, LIU H, WANG Y, et al.Survey on deep learning image recognition in dilemma of small samples[J].Journal of Software, 2022, 33(1):193-210.(in Chinese) [6] KAISER L, NACHUM O, ROY A, et al.Learning to remember rare events[C]//Proceedings of IEEE ICLRʼ17.Washington D.C., USA:IEEE Press, 2017:1568-1577. [7] DUAN Y, ANDRYCHOWICZ M, STADIE B C, et al.One-shot imitation learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1703.07326. [8] 刘嘉政.基于卷积神经网络的小样本树皮图像识别方法[J].西北林学院学报, 2019, 34(4):230-235. LIU J Z.Small sample bark image recognition method based on convolutional neural network[J].Journal of Northwest Forestry University, 2019, 34(4):230-235.(in Chinese) [9] KANG B Y, LIU Z, WANG X, et al.Few-shot object detection via feature reweighting[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2019:8419-8428. [10] PFISTER T, CHARLES J, ZISSERMAN A.Domain-adaptive discriminative one-shot learning of gestures[C]//Proceedings of IEEE ECCVʼ14.Washington D.C., USA:IEEE Press, 2014:814-829. [11] 孙存威, 文畅, 谢凯, 等.深度迁移模型下的小样本声纹识别方法[J].计算机工程与设计, 2018, 43(12):3816-3822. SUN C W, WEN C, XIE K, et al.Voiceprint recognition method of small sample based on deep migration model[J].Computer Engineering and Design, 2018, 43(12):3816-3822.(in Chinese) [12] 程林, 袁慊, 王瑜, 等.自体支气管基底层细胞治疗慢性阻塞性肺疾病的小样本探索性研究[J].重庆医学, 2019, 48(23):4012-4016. CHENG L, YUAN Q, WANG Y, et al.A small sample exploratory study of autogenous bronchial basal cells for the treatment of chronic obstructive pulmonary disease[J].Chongqing Medical, 2019, 48(23):4012-4016.(in Chinese) [13] 何喜军, 马珊, 武玉英, 等.小样本下多维指标融合的电商产品销量预测[J].计算机工程与应用, 2019, 55(15):177-184. HE X J, MA S, WU Y Y, et al.E-commerce product sales forecast with multi-dimensional index integration under small sample[J].Computer Engineering and Applications, 2019, 55(15):177-184.(in Chinese) [14] 陈龙, 张峰, 蒋升.小样本条件下基于深度森林学习模型的典型军事目标识别方法[J].中国电子科学研究院学报, 2019, 14(3):232-237. CHEN L, ZHANG F, JIANG S.Deep forest learning for military object recognition under small training set condition[J].Journal of Chinese Academy of Electronics, 2019, 14(3):232-237.(in Chinese) [15] 王佳林.基于深度学习的小样本入侵检测研究[D].北京:北京交通大学, 2020. WANG J L.Research on intrusion detection of small samples based on deep learning[D].Beijing:Beijing Jiaotong University, 2020.(in Chinese) [16] LI F F, PERONA F R.A Bayesian approach to unsupervised one-shot learning of object categories[C]//Proceedings of the 9th IEEE International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2019:1134-1141. [17] MITCHELL T M.Machine learning[M].[S.1.]:McGraw-Hill, Inc., 1997. [18] 吴章凯.基于特征增强元学习优化的小样本学习的研究[D].南京:南京大学, 2020. WU Z K.A research on few-shot learning based on feature enhanced meta-learning optimization[D].Nanjing:Nanjing University, 2020.(in Chinese) [19] DHILLON G S, CHAUDHARI P, RAVICHANDRAN A, et al.A baseline for few-shot image classification[C]//Proceedings of IEEE ICLRʼ20.Washington D.C., USA:IEEE Press, 2020:2542-1556. [20] WANG X, HUANG T E, DARRELL T, et al.Frustratingly simple few-shot object detection[EB/OL].[2022-06-10].https://arxiv.org/abs/2003.06957. [21] HOWARD J, RUDER S.Universal language model fine-tuning for text classification[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2018:3517-3529. [22] NAKAMURA A, HARADA T.Revisiting fine-tuning for few-shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1910.00216. [23] LIU H K, TAM D, MUQEETH M, et al.Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning[EB/OL].[2022-06-10].https://arxiv.org/abs/2205.05638. [24] HSU K, LEVINE S, FINN C.Unsupervised learning via meta-learning[C]//Proceedings of IEEE International Conference on Learning Representations.Washington D.C., USA:IEEE Press, 2019:235-246. [25] JI Z L, ZOU X L, HUANG T J, et al.Unsupervised few-shot learning via self-supervised training[EB/OL].[2022-06-10].https://arxiv.org/abs/1912.12178. [26] KHODADADEH S, BLNI L, SHAH M.Unsupervised meta-learning for few-shot image classification[C]//Proceedings of NIPSʼ19.Cambridge, USA:MIT Press, 2019:233-246. [27] ANTONIOU A, STORKEY A.Assume, augment and learn:unsupervised few-shot meta-learning via random labels and data augmentation[EB/OL].[2022-06-10].https://arxiv.org/abs/1902.09884. [28] WANG Y K, XU C M, LIU C, et al.Instance credibility inference for few-shot learning[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2020:12833-12842. [29] 秦铁鑫.基于数据扩充的小样本学习算法研究[D].南京:南京大学, 2021. QIN T X.The study of data augmentation based few-shot learning algorithms[D].Nanjing:Nanjing University, 2021.(in Chinese) [30] QIN T, LI W, SHI Y, et al.Diversity helps:unsupervised few-shot learning via distribution shift-based data augmentation[EB/OL].[2022-06-10].https://arxiv.org/abs/2004.05805. [31] LI P, ZHAO G P, XU X H.Coarse-to-fine few-shot classification with deep metric learning[J].Information Sciences, 2022, 610:592-604. [32] REN M Y, TRIANTAFILLOU E, RAVI S, et al.Meta-learning for semi-supervised few-shot classification[EB/OL].[2022-06-10].https://arxiv.org/abs/1803.00676. [33] LIU Y, LEE J, PARK M, et al.Learning to propagate labels:transductive propagation network for few-shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1805.10002. [34] WANG Y X, HEBERT M.Learning from small sample sets by combining unsupervised meta-training with CNNs[C]//Proceedings of NIPSʼ19.Cambridge, USA:MIT Press, 2016:244-252. [35] BONEY R, ILIN A.Semi-supervised few-shot learning with MAML[C]//Proceedings of ICLRʼ18.Washington D.C., USA:IEEE Press, 2018:568-579. [36] HOU R B, CHANG H, MA B P, et al.Cross attention network for few-shot classification[EB/OL].[2022-06-10].https://arxiv.org/abs/1910.07677. [37] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al.Generative adversarial nets[C]//Proceedings of NIPSʼ14.Cambridge, USA:MIT Press, 2014:2672-2680. [38] ANTONIOU A, STORKEY A, EDWARDS H.Data augmentation generative adversarial networks[EB/OL].[2022-06-10].https://arxiv.org/abs/1711.04340. [39] MEHROTRA A, DUKKIPATI A.Generative adversarial residual pairwise networks for one shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1703.08033. [40] CHEN Z T, FU Y W, WANG Y X, et al.Image deformation meta-networks for one-shot learning[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:8672-8681. [41] WANG Y X, GIRSHICK R, HEBERT M, et al.Low-shot learning from imaginary data[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:7278-7286. [42] ZHANG R, CHE T, GHAHRAMANI Z, et al.Metagan:an adversarial approach to few-shot learning[C]//Proceedings of NIPSʼ19.Cambridge, USA:MIT Press 2018:2365-2374. [43] SCHWARTZ E, KARLINSKY L, SHTOK J, et al.Delta-encoder:an effective sample synthesis method for few-shot object recognition[C]//Proceedings of NIPSʼ18.Cambridge, USA:MIT Press, 2018:2845-2855P. [44] XIAN Y Q, SHARMA S, SCHIELE B, et al.F-VAEGAN-D2:a feature generating framework for any-shot learning[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:10267-10276. [45] CHEN W J.Semi-supervised learning study summary[J].Computer Knowledge and Technology, 2011, 7(16):3887-3889. [46] SUBEDI B, SATHISHKUMAR V E, MAHESHWARI V, et al.Feature learning-based generative adversarial network data augmentation for class-based few-shot learning[J].Mathematical Problems in Engineering, 2022, 22:1-20. [47] DIXIT M, KWITT R, NIETHAMMER M, et al.AGA:attribute-guided augmentation[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2017:3328-3336. [48] HARIHARAN B, GIRSHICK R.Low-shot visual recognition by shrinking and hallucinating features[C]//Proceedings of 2017 IEEE International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2017:3037-3046. [49] LIU B, WANG X D, DIXIT M, et al.Feature space transfer for data augmentation[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:9090-9098. [50] CHEN Z T, FU Y W, ZHANG Y D, et al.Multi-level semantic feature augmentation for one-shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1804.05298. [51] ZHANG H G, ZHANG J, KONIUSZ P.Few-shot learning via saliency-guided hallucination of samples[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:2765-2774. [52] 周琳钧.小样本学习理论方法研究[D].北京:清华大学, 2021. ZHOU L J.Few-shot learning:theories and methods[D].Beijing:Tsinghua University, 2021.(in Chinese) [53] TSENG H Y, LEE H Y, HUANG J B, et al.Cross-domain few-shot classification via learned feature-wise transformation[EB/OL].[2022-06-10].https://arxiv.org/abs/2001.08735. [54] JING K L, ZHANG X M, YANG Z Y, et al.Feature augmentation learning for few-shot palmprint image recognition with unconstrained acquisition[C]//Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing.Washington D.C., USA:IEEE Press, 2022:3323-3327. [55] AURELIEN B, AMAURY H, MARC S.A survey on metric learning for feature vectors andstructured data[EB/OL].[2022-06-10].https://arxiv.org/abs/1306.6709. [56] SNELL J, SWERSKY K, ZEMEL R S.Prototypical networks for few-shot learning[C]//Proceedings of NIPSʼ17.Cambridge, USA:MIT Press, 2017:4077-4087. [57] SUNG F, YANG Y X, ZHANG L, et al.Learning to compare:relation network for few-shot learning[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:1199-1208. [58] LI W B, WANG L, XU J L, et al.Revisiting local descriptor based image-to-class measure for few-shot learning[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:7253-7260. [59] LI W B, XU J L, HUO J, et al.Distribution consistency based covariance metric networks for few-shot learning[C]//Proceedings of AAAI Conference on Artificial Intelligence.[S.1.]:AAAI Press, 2019:8642-8649. [60] VINYALS O, BLUNDELL C, LILLICRAP T, et al.Matching networks for one shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1606.04080. [61] HAN G X, HUANG S Y, MA J W, et al.Meta faster R-CNN:towards accurate few-shot object detection with attentive feature alignment[C]//Proceedings of AAAI Conference on Artificial Intelligence.[S.1.]:AAAI Press, 2022:780-789. [62] ZHANG J H, MAIMAITI M, XING G, et al.MGIMN:multi-grained interactive matching network for few-shot text classification[EB/OL].[2022-06-10].https://arxiv.org/abs/2204.04952. [63] LIU B, CAO Y, LIN Y T, et al.Negative margin matters:understanding margin in few-shot classification[EB/OL].[2022-06-10].https://arxiv.org/abs/2003.12060. [64] 孙牧野.基于度量学习的小样本图像分类技术研究[D].西安:西安电子科技大学, 2020. SUN M Y.Research on few shot learning for image classification based on metric learning[D].Xi'an:Xidian University, 2020.(in Chinese) [65] CHAI Y H, DU L, QIU J, et al.Dynamic prototype network based on sample adaptation for few-shot malware detection[J].IEEE Transactions on Knowledge and Data Engineering, 2022, 99(35):1-10. [66] SUNG F, YANG Y X, ZHANG L, et al.Learning to compare:relation network for few-shot learning[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:1199-1208. [67] 安晨, 汪成亮, 廖超, 等.基于注意力关系网络的无线胶囊内镜图像分类方法[J].计算机工程, 2021, 47(10):252-259, 268. AN C, WANG C L, LIAO C, et al.Wireless capsule endoscopy image classification method based on attention relational network[J].Computer Engineering, 2021, 47(10):252-259, 268.(in Chinese) [68] ABDELAZIZ M, ZHANG Z P.Multi-scale kronecker-product relation networks for few-shot learning[J].Multimedia Tools and Applications, 2022, 81(5):6703-6722. [69] GARCIA V, BRUNA J.Few-shot learning with graph neural networks[EB/OL].[2022-06-10].https://arxiv.org/abs/1711.04043. [70] 袁自勇, 高曙, 曹姣, 等.基于异构图卷积网络的小样本短文本分类方法[J].计算机工程, 2021, 47(12):87-94. YUAN Z Y, GAO S, CAO J, et al.Method for few-shot short text classification based on heterogeneous graph convolutional network[J].Computer Engineering, 2021, 47(12):87-94.(in Chinese) [71] TAO X Y, HONG X P, CHANG X Y, et al.Few-shot class-incremental learning[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2020:12180-12189. [72] YU T Y, HE S, SONG Y Z, et al.Hybrid graph neural networks for few-shot learning[C]//Proceedings of AAAI Conference on Artificial Intelligence.[S.1.]:AAAI Press, 2022:3179-3187. [73] ZHAO K K, ZHANG Z Y, JIANG B, et al.LGLNN:label guided graph learning-neural network for few-shot learning[J].Neural Networks, 2022, 155:50-57. [74] 刘鑫, 周凯锐, 何玉琳, 等.基于度量的小样本分类方法研究综述[J].模式识别与人工智能, 2021, 34(10):909-923. LIU X, ZHOU K R, HE Y L, et al.Survey of metric-based few-shot classification[J].Pattern Recognition and Artificial Intelligence, 2021, 34(10):909-923.(in Chinese) [75] BENGIO, YOSHUA.A meta-transfer objective for learning to disentangle causal mechanisms[C]//Proceedings of IEEE ICLRʼ20.Washington D.C., USA:IEEE Press, 2020:356-368. [76] FINN C, ABBEEL P, LEVINE S.Model-agnostic meta-learning for fast adaptation of deep net-works[C]//Proceedings of IEEE ICML ʼ20.Washington D.C., USA:IEEE Press, 2017:1126-1135. [77] JAMAL M A, QI G J.Task agnostic meta-learning for few-shot learning[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:11711-11719. [78] JIANG X, HAVAEI M, CHARTRAND G, et al.On the importance of attention in meta-learning for few-shot text classification[EB/OL].[2022-06-10].https://arxiv.org/abs/1806.00852. [79] LI Z G, ZHOU F W, CHEN F, et al.Meta-SGD:learning to learn quickly for few-shot learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1707.09835. [80] RUSU A A, RAO D, SYGNOWSKI J, et al.Meta-learning with latent embedding optimization[EB/OL].[2022-06-10].https://arxiv.org/abs/1807.05960. [81] LIU G D, WANG T L, ZHANG S X.Generating pseudo-labels adaptively for few-shot model-agnostic meta-learning[EB/OL].[2022-06-10].https://arxiv.org/abs/2207.0421. [82] LOPEZ-PAZ D, RANZATO M.Gradient episodic memory for continual learning[EB/OL].[2022-06-10].https://arxiv.org/abs/1706.08840. [83] SOH J W, CHO S, CHO N I.Meta-transfer learning for zero-shot super-resolution[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2020:3513-3522. [84] LEI S Z, DONG B H, SHAN A K, et al.Attention meta-transfer learning approach for few-shot iris recognition[J].Computers and Electrical Engineering, 2022, 99:107848. [85] RAVI S, LAROCHELLE H.Optimization as a model for few-shot learning[C]//Proceedings of the 5th International Conference on Learning Representations.Washington D.C., USA:IEEE Press, 2016:458-469. [86] SANTORO A, BARTUNOV S, BOTVINICK M, et al.One-shot learning with memory-augmented neural networks[EB/OL].[2022-06-10].https://arxiv.org/abs/1605.06065. [87] 刘颖, 雷研博, 范九伦, 等.基于小样本学习的图像分类技术综述[J].自动化学报, 2021, 47(2):297-315. LIU Y, LEI Y B, FAN J L, et al.Survey on image classification technology based on small sample learning[J].Acta Automatica Sinica, 2021, 47(2):297-315.(in Chinese) [88] 李凡长, 刘洋, 吴鹏翔, 等.元学习研究综述[J].计算机学报, 2021, 44(2):422-446. LI F Z, LIU Y, WU P X, et al.A survey on recent advances in meta-learning[J].Chinese Journal of Computers, 2021, 44(2):422-446.(in Chinese) |