[1] ZHANG L F, SONG J B, GAO A N, et al. Be your own teacher: improve the performance of convolutional neural networks via self-distillation[C]//Proceedings of International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2019: 3712-3721.
[2] WANG Y Q, YAO Q M, KWOK J T, et al. Generalizing from a few examples: a survey on few-shot learning[J]. ACM Computing Surveys, 2021, 53(3): 63.
[3] WANG Q Y, LU Y, ZHANG X K, et al. Region of interest selection for functional features[J]. Neurocomputing, 2021, 422: 235-244.
[4] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 3-19.
[5] MUNKHDALAI T, YU H. Meta networks[J]. Proceedings of Machine Learning Research, 2017, 70: 2554-2563.
[6] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]//Proceedings of the 33rd International Conference on Machine Learning. New York, USA: ACM Press, 2016: 1842-1850.
[7] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//Proceedings of International Conference on Learning Representations. Vancouver, Canada: [s.n.], 2016: 1-9.
[8] PERRETT T, MASULLO A, BURGHARDT T, et al. Temporal-relational cross Transformers for few-shot action recognition[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2021: 475-484.
[9] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Proceedings of Conference on Advances in Neural Information Processing Systems. [S.l.]: AAAI Press, 2017: 30.
[10] ZHANG S Y, ZHOU J L, HE X M. Learning implicit temporal alignment for few-shot video classification[EB/OL]. [2022-01-15]. https://arxiv.org/abs/2105.04823.
[11] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning. New York, USA: ACM Press, 2017: 1-10.
[12] BHAT P, ARANI E, ZONOOZ B. Distill on the Go: online knowledge distillation in self-supervised learning[C]//Proceedings of Conference on Computer Vision and Pattern Recognition Workshops. Washington D.C., USA: IEEE Press, 2021: 2672-2681.
[13] KIM K, JI B, YOON D, et al. Self-knowledge distillation with progressive refinement of targets[C]//Proceedings of International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2021: 6547-6556.
[14] KE T W, MAIRE M, YU S X. Multigrid neural architectures[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 4067-4075.
[15] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 936-944.
[16] ZHAO H S, QI X J, SHEN X Y, et al. ICNet for real-time semantic segmentation on high-resolution images[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 418-434.
[17] SUN K, XIAO B, LIU D, et al. Deep high-resolution representation learning for human pose estimation[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 5686-5696.
[18] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2022-01-15]. https://arxiv.org/pdf/1704.04861.pdf.
[19] CHEN Y P, FAN H Q, XU B, et al. Drop an octave: reducing spatial redundancy in convolutional neural networks with octave convolution[C]//Proceedings of International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2019: 3434-3443.
[20] CHIN T W, DING R, MARCULESCU D. AdaScale: towards real-time video object detection using adaptive scaling[J]. Proceedings of Machine Learning and Systems, 2019, 1: 431-441.
[21] DHILLON G S, CHAUDHARI P, RAVICHANDRAN A, et al. A baseline for few-shot image classification[EB/OL]. [2022-01-15]. https://arxiv.org/abs/1909.02729.
[22] ZHANG H, WU C R, ZHANG Z Y, et al. ResNeSt: split-attention networks[EB/OL]. [2022-01-15]. https://arxiv.org/abs/2004.08955.
[23] ANIL R, PEREYRA G, PASSOS A, et al. Large scale distributed neural network training through online distillation[EB/OL]. [2022-01-15]. https://arxiv.org/pdf/1804.03235.pdf.
[24] BAIK S, HONG S, LEE K M. Learning to forget for meta-learning[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 2376-2384.
[25] LEE K, MAJI S, RAVICHANDRAN A, et al. Meta-learning with differentiable convex optimization[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 10649-10657.
[26] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2016: 3637-3645.
[27] XUE Z Y, XIE Z S, XING Z, et al. Relative position and map networks in few-shot learning for image classification[C]//Proceedings of Conference on Computer Vision and Pattern Recognition Workshops. Washington D.C., USA: IEEE Press, 2020: 4032-4036.
[28] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 4080-4090.
[29] ALLEN K R, SHELHAMER E, SHIN H, et al. Infinite mixture prototypes for few-shot learning[EB/OL]. [2022-01-15]. https://arxiv.org/pdf/1902.04552v1.pdf.
[30] RAVICHANDRAN A, BHOTIKA R, SOATTO S. Few-shot learning with embedded class models and shot-free meta training[C]//Proceedings of International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2019: 331-339.
[31] SUNG F, YANG Y X, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1199-1208.
[32] LI W B, XU J L, HUO J, et al. Distribution consistency based covariance metric networks for few-shot learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. [S.l.]: AAAI Press, 2019: 8642-8649.
[33] DONG C Q, LI W B, HUO J, et al. Learning task-aware local representations for few-shot learning[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Yokohama, Japan: [s.n.], 2020: 716-722.
[34] LI W B, WANG L, XU J L, et al. Revisiting local descriptor based image-to-class measure for few-shot learning[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 7253-7260.
[35] LI W, WANG L, HUO J, et al. Asymmetric distribution measure for few-shot learning[EB/OL]. [2022-01-15]. https://arxiv.org/abs/2002.00153.