[1] XIE D Z, PANG C, WU G H, et al. Feature channel adaptive enhancement for fine-grained visual classification[EB/OL]. [2024-05-05]. https://link.springer.com/chapter/10.1007/978-3-031-47665-5_16.
[2] CAO S Y, WANG W, ZHANG J, et al. A few-shot fine-grained image classification method leveraging global and local structures[J]. International Journal of Machine Learning and Cybernetics, 2022, 13(8): 2273-2281.
[3] ZHANG Y. A fine-grained image classification and detection method based on convolutional neural network fused with attention mechanism[J]. Computational Intelligence and Neuroscience, 2022, 2022: 2974960.
[4] HUANG H X, ZHANG J J, ZHANG J, et al. Compare more nuanced: pairwise alignment bilinear network for few-shot fine-grained learning[C]//Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). Washington D.C., USA: IEEE Press, 2019: 91-96.
[5] ZHENG Z J, FENG X, YU H Q, et al. BDLA: bi-directional local alignment for few-shot learning[J]. Applied Intelligence, 2023, 53(1): 769-785.
[6] 贺小箭, 林金福. 融合弱监督目标定位的细粒度小样本学习[J]. 中国图象图形学报, 2022, 27(7): 2226-2239.
HE X J, LIN J F. Weakly-supervised object localization based fine-grained few-shot learning[J]. Journal of Image and Graphics, 2022, 27(7): 2226-2239. (in Chinese)
[7] GE W F, LIN X R, YU Y Z. Weakly supervised complementary parts models for fine-grained image classification from the bottom up[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 3029-3038.
[8] 白尚旺, 王梦瑶, 胡静, 等. 多区域注意力的细粒度图像分类网络[J]. 计算机工程, 2024, 50(1): 271-278.
BAI S W, WANG M Y, HU J, et al. Multi-region attention network for fine-grained image classification[J]. Computer Engineering, 2024, 50(1): 271-278. (in Chinese)
[9] QIAN L L, YU T, YANG J Y. Multi-scale feature fusion of covariance pooling networks for fine-grained visual recognition[J]. Sensors, 2023, 23(8): 3970.
[10] 李小雨, 罗娜. 基于迁移类内变化增强数据的小样本学习方法[J]. 计算机工程, 2025, 51(9): 242-251.
LI X Y, LUO N. Few-shot learning method with augmentation data based on transferring intra-class variations[J]. Computer Engineering, 2025, 51(9): 242-251. (in Chinese)
[11] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the International Conference on Machine Learning. Washington D.C., USA: IEEE Press, 2017: 12-24.
[12] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[J]. Advances in Neural Information Processing Systems, 2017, 30: 35-42.
[13] YAN L L, LI F Z, ZHANG L, et al. Discriminant space metric network for few-shot image classification[J]. Applied Intelligence, 2023, 53(14): 17444-17459.
[14] WEI X S, WANG P, LIU L Q, et al. Piecewise classifier mappings: learning fine-grained learners for novel categories with few examples[J]. IEEE Transactions on Image Processing, 2019, 28(12): 6116-6125.
[15] SONG Q, ZHOU S, XU L. Learning more discriminative local descriptors for few-shot learning[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2305.08721.
[16] MENG X X, WANG X W, YIN S L, et al. Few-shot image classification algorithm based on attention mechanism and weight fusion[J]. Journal of Engineering and Applied Science, 2023, 70(1): 14.
[17] YANG Y C, SOATTO S. FDA: Fourier domain adaptation for semantic segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 4084-4094.
[18] LIN H, TSE R, TANG S K, et al. Few-shot learning for plant-disease recognition in the frequency domain[J]. Plants, 2022, 11(21): 2814.
[19] CHENG H, YANG S Y, ZHOU J T, et al. Frequency guidance matters in few-shot learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2024: 11780-11790.
[20] QIN Z Q, ZHANG P Y, WU F, et al. FcaNet: frequency channel attention networks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2022: 763-772.
[21] LIU H F, PENG P, CHEN T, et al. FECANet: boosting few-shot semantic segmentation with feature-enhanced context-aware network[J]. IEEE Transactions on Multimedia, 2023, 25: 8580-8592.
[22] LI X X, WU J J, SUN Z, et al. BSNet: bi-similarity network for few-shot fine-grained image classification[J]. IEEE Transactions on Image Processing, 2021, 30: 1318-1331.
[23] LIAO J J, LEWIS J W. A note on concordance correlation coefficient[J]. PDA Journal of Pharmaceutical Science and Technology, 2000, 54(1): 23-26.
[24] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2009: 248-255.
[25] SONG H O, XIANG Y, JEGELKA S, et al. Deep metric learning via lifted structured feature embedding[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2016: 4004-4012.
[26] KHOSLA A, JAYADEVAPRAKASH N, YAO B, et al. Novel dataset for fine-grained image categorization: Stanford Dogs[EB/OL]. [2024-05-05]. https://people.csail.mit.edu/khosla/papers/fgvc2011.pdf.
[27] KRAUSE J, STARK M, JIA D, et al. 3D object representations for fine-grained categorization[C]//Proceedings of the IEEE International Conference on Computer Vision Workshops. Washington D.C., USA: IEEE Press, 2014: 554-561.
[28] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. Washington D.C., USA: IEEE Press, 2016: 3637-3645.
[29] SUNG F, YANG Y X, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1199-1208.
[30] LI W B, WANG L, XU J L, et al. Revisiting local descriptor based image-to-class measure for few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 7253-7260.
[31] XUE Z, DUAN L, LI W, et al. Region comparison network for interpretable few-shot image classification[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2009.03558.
[32] OCHOA R G, MENDEZ R M, GONZALEZ-ZAPATA J, et al. Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2305.09062.
[33] YANG Y J, FENG Y X, ZHU L, et al. Feature fusion network based on few-shot fine-grained classification[J]. Frontiers in Neurorobotics, 2023, 17: 1301192.
[34] JIANG Z H, KANG B Y, ZHOU K Q, et al. Few-shot classification via adaptive attention[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2008.02465.
[35] XU Q Y, SU J, WANG Y, et al. Few-shot learning based on double pooling squeeze and excitation attention[J]. Electronics, 2023, 12(1): 27.