
Computer Engineering (计算机工程) ›› 2025, Vol. 51 ›› Issue (9): 242-251. doi: 10.19678/j.issn.1000-3428.0069754

• Graphics and Image Processing •

Few-Shot Learning Method with Augmented Data Based on Transferring Intra-Class Variations

LI Xiaoyu, LUO Na*

  1. Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
  • Received: 2024-04-16  Revised: 2024-05-15  Online: 2025-09-15  Published: 2025-09-26
  • Contact: LUO Na
  • Supported by: Hangzhou Xiaoshan District 2022 High-Level Talent Entrepreneurship and Innovation "5213" Program

Abstract:

Few-shot learning aims to classify new categories from only one or a few labeled examples. Data augmentation is a direct and effective way to address this challenge, but the key is to ensure that the augmented data are both diverse and discriminable. This paper proposes a two-stage data augmentation method based on transferring the intra-class variations of the base classes; the learning process is divided into a representation learning stage and a few-shot learning stage. In the representation learning stage, the model learns instance-specific features of the base-class data through a self-supervised task and class-specific features through a supervised task; from these two kinds of features it computes the intra-class variations of the base-class data and models a distribution of intra-class variation for each base class. In the few-shot learning stage, the model samples task-related intra-class variation information from the base-class distributions and adds it to the few-shot features, thereby augmenting the few-shot data. Experimental results show that, compared with the baseline model, the proposed method improves 5-way 1-shot classification performance on the miniImageNet, tieredImageNet, and CUB datasets by 4 to 7 percentage points and 5-way 5-shot performance by 3 to 7 percentage points, and it remains competitive with other data augmentation methods. This indicates that the generated augmented data increase the diversity of the few-shot data while preserving discriminability, verifying the feasibility and effectiveness of the proposed method.
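
To make the two-stage idea above concrete, the following is a minimal sketch of the augmentation step only, not the authors' implementation. It assumes that base-class and support features have already been extracted by a backbone network, models each base class's intra-class variation (feature minus class prototype) as a zero-mean diagonal Gaussian, and enlarges the support set by adding sampled variations; the paper's task-related selection of variations is simplified here to a random choice of base class, and all function and variable names are illustrative.

import numpy as np

def fit_variation_distributions(base_features, base_labels):
    """Model each base class's intra-class variation (feature minus class prototype)
    as a zero-mean diagonal Gaussian, one per base class."""
    dists = {}
    for c in np.unique(base_labels):
        feats = base_features[base_labels == c]      # (n_c, d) features of base class c
        variations = feats - feats.mean(axis=0)      # instance-specific deviation from the prototype
        dists[c] = variations.std(axis=0) + 1e-6     # per-dimension spread of the variation
    return dists

def augment_support(support_features, dists, n_aug=5, seed=0):
    """Create extra samples by adding sampled base-class variations to each support feature."""
    rng = np.random.default_rng(seed)
    sigmas = list(dists.values())
    augmented = []
    for f in support_features:                       # one feature vector per support shot
        for _ in range(n_aug):
            sigma = sigmas[rng.integers(len(sigmas))]        # simplified: pick a base class at random
            augmented.append(f + rng.normal(0.0, sigma))     # transfer its intra-class variation
    return np.vstack([support_features, np.asarray(augmented)])

# Toy usage: 10 base classes with 20 samples each, a 5-way 1-shot support set, d = 64.
base_x, base_y = np.random.randn(200, 64), np.repeat(np.arange(10), 20)
support = np.random.randn(5, 64)
augmented = augment_support(support, fit_variation_distributions(base_x, base_y))  # shape (30, 64)

The augmented features would then be used together with the original support features to train a simple classifier head, for example a prototype-based or logistic-regression classifier.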

Key words: few-shot learning, data augmentation, intra-class variation, class-specific feature, instance-specific feature