[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[2] WANG Y Q, YAO Q M, KWOK J T, et al. Generalizing from a few examples: a survey on few-shot learning[J]. ACM Computing Surveys, 2020, 53(3): 1-34.
[3] 张河萍, 方志军, 卢俊鑫, 等. 基于知识增强自适应原型网络的小样本关系分类[J]. 计算机工程, 2025, 51(4): 129-136.
ZHANG H P, FANG Z J, LU J X, et al. Classification of small sample relationships based on knowledge-enhanced adaptive prototype network[J]. Computer Engineering, 2025, 51(4): 129-136. (in Chinese)
[4] LI F F, FERGUS R, PERONA P. One-shot learning of object categories[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 594-611.
[5] ZHENG K P, ZHANG H S, HUANG W R. DiffKendall: a novel approach for few-shot learning with differentiable Kendall's rank correlation[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2307.15317.
[6] KANG S, HWANG D, EO M, et al. Meta-learning with a geometry-adaptive preconditioner[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2023: 16080-16090.
[7] LU Y N, WEN L J, LIU J Z, et al. Self-supervision can be a good few-shot learner[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2207.09176.
[8] HILLER M, HARANDI M, DRUMMOND T. On enforcing better conditioned meta-learning for rapid few-shot adaptation[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2022: 4059-4071.
[9] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1706.03762.
[10] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: transformers for image recognition at scale[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2010.11929.
[11] SUN C, SHRIVASTAVA A, SINGH S, et al. Revisiting unreasonable effectiveness of data in deep learning era[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2017: 843-852.
[12] 李清格, 杨小冈, 卢瑞涛, 等. 计算机视觉中的Transformer发展综述[J]. 小型微型计算机系统, 2023, 44(4): 850-861.
LI Q G, YANG X G, LU R T, et al. Transformer in computer vision: a survey[J]. Journal of Chinese Computer Systems, 2023, 44(4): 850-861. (in Chinese)
[13] LU Z Y, XIE H T, LIU C B, et al. Bridging the gap between vision transformers and convolutional neural networks on small datasets[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2022: 14663-14677.
[14] FANG S, LI K Y, LI Z. Salient positions based attention network for image classification[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2106.04996.
[15] YE H J, MING L, ZHAN D C, et al. Few-shot learning with a strong teacher[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(3): 1425-1440.
[16] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2016: 3637-3645.
[17] REN M Y, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1803.00676.
[18] HE X T, PENG Y X. Fine-grained visual-textual representation learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(2): 520-531.
[19] HENDRYCKS D, GIMPEL K. Gaussian Error Linear Units (GELUs)[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1606.08415.
[20] MCCULLOCH W S, PITTS W. A logical calculus of the ideas immanent in nervous activity[J]. Bulletin of Mathematical Biology, 1990, 52(1/2): 99-115.
[21] BRIDLE J S. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition[EB/OL]. [2024-05-05]. https://link.springer.com/chapter/10.1007/978-3-642-76153-9_28.
[22] LOSHCHILOV I, HUTTER F. Decoupled weight decay regularization[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1711.05101.
[23] SUNG F, YANG Y X, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2018: 1199-1208.
[24] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 4077-4087.
[25] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning. New York, USA: ACM Press, 2017: 1126-1135.
[26] CHEN Y B, LIU Z, XU H J, et al. Meta-baseline: exploring simple meta-learning for few-shot learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 9042-9051.
[27] YOON S W, SEO J, MOON J. TapNet: neural network augmented with task-adaptive projection for few-shot learning[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1905.06549.
[28] FLENNERHAG S, RUSU A A, PASCANU R, et al. Meta-learning with warped gradient descent[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1909.00025.
[29] RAJASEGARAN J, KHAN S, HAYAT M, et al. Meta-learning the learning trends shared across tasks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2010.09291.
[30] FAN C, RAM P, LIU S J. Sign-MAML: efficient model-agnostic meta-learning by SignSGD[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2109.07497.
[31] PENG D N, PAN S J. Clustered task-aware meta-learning by learning from learning paths[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9426-9438.
[32] ABBAS M, XIAO Q W, CHEN L S, et al. Sharp-MAML: sharpness-aware model-agnostic meta learning[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2206.03996.
[33] 李小雨, 罗娜. 基于迁移类内变化增强数据的小样本学习方法[J]. 计算机工程, 2025, 51(9): 242-251.
LI X Y, LUO N. Few-shot learning method with augmentation data based on transferring intra-class variations[J]. Computer Engineering, 2025, 51(9): 242-251. (in Chinese)
[34] PRZEWIĘŹLIKOWSKI M, PRZYBYSZ P, TABOR J, et al. HyperMAML: few-shot adaptation of deep models with hypernetworks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2205.15745.
[35] LI G, REN B Y, WANG H Z. EEML: ensemble embedded meta-learning[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2206.09195.
[36] CHEN Z Y, GE J X, ZHAN H S, et al. Pareto self-supervised training for few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2021: 13658-13667.
[37] CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2002.05709.
[38] CHEN X L, HE K M. Exploring simple Siamese representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2021: 15745-15753.
[39] AFHAM M, KHAN S, KHAN M H, et al. Rich semantics improve few-shot learning[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2104.12709.
[40] ZHOU Z Q, QIU X, XIE J T, et al. Binocular mutual learning for improving few-shot classification[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 8382-8391.
[41] YE H J, HU H X, ZHAN D C, et al. Few-shot learning via embedding adaptation with set-to-set functions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 8805-8814.
[42] HUANG S Y, MA J W, HAN G X, et al. Task-adaptive negative envision for few-shot open-set recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2022: 7161-7170.
[43] KANG D, KWON H, MIN J H, et al. Relational embedding for few-shot classification[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 8802-8813.
[44] ATANBORI J, ROSE S. MergedNET: a simple approach for one-shot learning in Siamese networks based on similarity layers[J]. Neurocomputing, 2022, 509: 1-10.
[45] LIU B, CAO Y, LIN Y T, et al. Negative margin matters: understanding margin in few-shot classification[EB/OL]. [2024-05-05]. https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490426.pdf.
[46] ZHANG C, CAI Y J, LIN G S, et al. DeepEMD: few-shot image classification with differentiable earth mover's distance and structured classifiers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 12200-12210.
[47] CHEN C F, YANG X S, XU C S, et al. ECKPN: explicit class knowledge propagation network for transductive few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2021: 6592-6601.
[48] KIM M, HOSPEDALES T. Gaussian process meta few-shot classifier learning via linear discriminant Laplace approximation[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2111.05392.
[49] SENDERA M, PRZEWIĘŹLIKOWSKI M, KARANOWSKI K, et al. HyperShot: few-shot learning by kernel hypernetworks[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Washington D.C., USA: IEEE Press, 2023: 2468-2477.
[50] VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(11): 2579-2605.