[1] IDDAN G, MERON G, GLUKHOVSKY A, et al. Wireless capsule endoscopy[J]. Nature, 2000, 405(6785): 417-427.
[2] YUAN Y, LI B, MENG M Q H. Bleeding frame and region detection in the wireless capsule endoscopy video[J]. IEEE Journal of Biomedical and Health Informatics, 2015, 20(2): 624-630.
[3] QADIR H A, BALASINGHAM I, SOLHUSVIK J, et al. Improving automatic polyp detection using CNN by exploiting temporal dependency in colonoscopy video[J]. IEEE Journal of Biomedical and Health Informatics, 2019, 24(1): 180-193.
[4] GOYAL M, REEVES N D, RAJBHANDARI S, et al. Robust methods for real-time diabetic foot ulcer detection and localization on mobile devices[J]. IEEE Journal of Biomedical and Health Informatics, 2018, 23(4): 1730-1741.
[5] JEBARANI W S L, DAISY V J. Assessment of Crohn's disease lesions in wireless capsule endoscopy images using SVM based classification[C]//Proceedings of the 2013 International Conference on Signal Processing, Image Processing & Pattern Recognition. Washington D.C., USA: IEEE Press, 2013: 303-307.
[6] COIMBRA M T, CUNHA J P S. MPEG-7 visual descriptors-contributions for automated feature extraction in capsule endoscopy[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2006, 16(5): 628-637.
[7] YUAN Y, LI B, MENG M Q H. WCE abnormality detection based on saliency and adaptive locality-constrained linear coding[J]. IEEE Transactions on Automation Science and Engineering, 2016, 14(1): 149-159.
[8] YUAN Y, YAO X, HAN J, et al. Discriminative joint-feature topic model with dual constraints for WCE classification[J]. IEEE Transactions on Cybernetics, 2017, 48(7): 2074-2085.
[9] SUNG F, YANG Y, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1199-1208.
[10] KOCH G R, ZEMEL R, SALAKHUTDINOV R. Siamese neural networks for one-shot image recognition[C]//Proceedings of the 32nd International Conference on Machine Learning. Washington D.C., USA: IEEE Press, 2015: 103-123.
[11] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of Advances in Neural Information Processing Systems. Barcelona, Spain: NIPS, 2016: 3630-3638.
[12] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Proceedings of Advances in Neural Information Processing Systems. Barcelona, Spain: NIPS, 2017: 4077-4087.
[13] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 7132-7141.
[14] WANG X, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 7794-7803.
[15] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 3-19.
[16] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 770-778.
[17] KOULAOUZIDIS A, IAKOVIDIS D K, YUNG D E, et al. KID project: an internet-based digital video atlas of capsule endoscopy for research purposes[J]. Endoscopy International Open, 2017, 5(6): 477-450.
[18] KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. [2020-05-03]. https://arxiv.xilesou.top/abs/1412.6980.
[19] CHEN W Y, LIU Y C, KIRA Z, et al. A closer look at few-shot classification[EB/OL]. [2020-05-03]. http://arxiv.xilesou.top/abs/1904.04232.pdf.
[20] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//Proceedings of the 5th International Conference on Learning Representations. Washington D.C., USA: IEEE Press, 2017: 123-135.
[21] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[EB/OL]. [2020-05-03]. https://arxiv.org/pdf/1703.03400.pdf.