| 1 |
HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2016: 770-778.
|
| 2 |
|
| 3 |
MUHAMMAD I, YAN Z. Supervised machine learning approaches: a survey. International Journal of Soft Computing, 2015, 5(3): 946-952.
|
| 4 |
DIKE H U, ZHOU Y M, DEVEERASETTY K K, et al. Unsupervised learning based on artificial neural network: a review[C]//Proceedings of the IEEE International Conference on Cyborg and Bionic Systems (CBS). Washington D.C., USA: IEEE Press, 2018: 322-327.
|
| 5 |
ZHANG X Y, LIU C L, SUEN C Y. Towards robust pattern recognition: a review. Proceedings of the IEEE, 2020, 108(6): 894-922.
doi: 10.1109/JPROC.2020.2989782
|
| 6 |
|
| 7 |
MCCLOSKEY M, COHEN N J. Catastrophic interference in connectionist networks: the sequential learning problem. Psychology of Learning and Motivation, 1989, 24: 109-165.
|
| 8 |
RATCLIFF R. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions. Psychological Review, 1990, 97(2): 285-308.
doi: 10.1037/0033-295X.97.2.285
|
| 9 |
FRENCH R M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 1999, 3(4): 128-135.
doi: 10.1016/S1364-6613(99)01294-2
|
| 10 |
ROBINS A. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 1995, 7(2): 123-146.
doi: 10.1080/09540099550039318
|
| 11 |
WICKRAMASINGHE B, SAHA G, ROY K. Continual learning: a review of techniques, challenges, and future directions. IEEE Transactions on Artificial Intelligence, 2024, 5(6): 2526-2546.
doi: 10.1109/TAI.2023.3339091
|
| 12 |
FRENCH R M. Semi-distributed representations and catastrophic forgetting in connectionist networks. Connection Science, 1992, 4(3/4): 365-377.
|
| 13 |
ROBINS A. Catastrophic forgetting in neural networks: the role of rehearsal mechanisms[C]//Proceedings of the 1st New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems. Washington D.C., USA: IEEE Press, 1993: 65-68.
|
| 14 |
WANG L Y, ZHANG X X, SU H, et al. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(8): 5362-5383.
doi: 10.1109/TPAMI.2024.3367329
|
| 15 |
LEE C S, LEE A Y. Clinical applications of continual learning machine learning. The Lancet Digital Health, 2020, 2(6): 279-281.
doi: 10.1016/S2589-7500(20)30102-3
|
| 16 |
GRAFFIETI G, BORGHI G, MALTONI D. Continual learning in real-life applications. IEEE Robotics and Automation Letters, 2022, 7(3): 6195-6202.
doi: 10.1109/LRA.2022.3167736
|
| 17 |
张文卓, 崔家宝, 孙毅, 等. 持续学习的研究进展及在无人平台中的应用. 无人系统技术, 2024, 7 (2): 1- 13.
|
|
ZHANG W Z, CUI J B, SUN Y, et al. Recent advances in continual learning and its application in unmanned platforms. Unmanned Systems Technology, 2024, 7(2): 1-13.
|
| 18 |
MAI Z D, LI R W, JEONG J, et al. Online continual learning in image classification: an empirical survey. Neurocomputing, 2022, 469: 28-51.
doi: 10.1016/j.neucom.2021.10.021
|
| 19 |
BIESIALSKA M, BIESIALSKA K, COSTA-JUSSÀ M R. Continual lifelong learning in natural language processing: a survey[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2012.09823.
|
| 20 |
|
| 21 |
KHARRAT A, DRIRA F, LEBOURGEOIS F, et al. Advancements and challenges in continual learning for natural language processing: insights and future prospects[C]//Proceedings of the 16th International Conference on Agents and Artificial Intelligence. Rome, Italy: Science and Technology Publications, 2024: 1255-1262.
|
| 22 |
DE LANGE M, ALJUNDI R, MASANA M, et al. A continual learning survey: defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(7): 3366-3385.
|
| 23 |
BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2020: 1877-1901.
|
| 24 |
RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]//Proceedings of the International Conference on Machine Learning. New York, USA: ACM Press, 2021: 8748-8763.
|
| 25 |
DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2010.11929.
|
| 26 |
JIA M L, TANG L M, CHEN B C, et al. Visual prompt tuning[C]//Proceedings of ECCV'22. Berlin, Germany: Springer, 2022.
|
| 27 |
|
| 28 |
|
| 29 |
|
| 30 |
ZHANG S Z, KONG D X, XING Y H, et al. Frequency-guided spatial adaptation for camouflaged object detection. IEEE Transactions on Multimedia, 2025, 27: 72-83.
doi: 10.1109/TMM.2024.3521681
|
| 31 |
XING Y H, WU Q R, CHENG D, et al. Dual modality prompt tuning for vision-language pre-trained model. IEEE Transactions on Multimedia, 2023, 26: 2056-2068.
|
| 32 |
DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Stroudsburg, USA: Association for Computational Linguistics, 2019: 4171-4186.
|
| 33 |
RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text Transformer. Journal of Machine Learning Research, 2020, 21(140): 1-67.
|
| 34 |
LIU Z, LIN Y T, CAO Y, et al. Swin Transformer: hierarchical vision Transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 9992-10002.
|
| 35 |
CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with Transformers[C]//Proceedings of ECCV'20. Berlin, Germany: Springer International Publishing, 2020: 213-229.
|
| 36 |
TOUVRON H, CORD M, DOUZE M, et al. Training data-efficient image Transformers & distillation through attention[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2012.12877.
|
| 37 |
KIRILLOV A, MINTUN E, RAVI N, et al. Segment anything[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 3992-4003.
|
| 38 |
ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2022: 10674-10685.
|
| 39 |
|
| 40 |
|
| 41 |
|
| 42 |
|
| 43 |
ESSER P, KULAL S, BLATTMANN A, et al. Scaling rectified flow Transformers for high-resolution image synthesis[C]//Proceedings of the 41st International Conference on Machine Learning. New York, USA: ACM Press, 2024: 12633.
|
| 44 |
HAN Z Y, GAO C, LIU J Y, et al. Parameter-efficient fine-tuning for large models: a comprehensive survey[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2403.14608.
|
| 45 |
DING N, QIN Y J, YANG G, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023, 5: 220-235.
doi: 10.1038/s42256-023-00626-4
|
| 46 |
朱飞, 张煦尧, 刘成林. 类别增量学习研究进展和性能评价. 自动化学报, 2023, 49 (3): 635- 660.
|
|
ZHU F, ZHANG X Y, LIU C L. Class incremental learning: a review and performance evaluation. Acta Automatica Sinica, 2023, 49(3): 635-660.
|
| 47 |
韩亚楠, 刘建伟, 罗雄麟. 连续学习研究进展. 计算机研究与发展, 2022, 59 (6): 1213- 1239.
|
|
HAN Y N, LIU J W, LUO X L. Research progress of continual learning. Journal of Computer Research and Development, 2022, 59(6): 1213-1239.
|
| 48 |
杨静, 李斌, 李少波, 等. 脑启发式持续学习方法: 技术、应用与发展. 电子与信息学报, 2022, 44 (5): 1865- 1878.
|
|
YANG J, LI B, LI S B, et al. Brain-inspired continuous learning: technology, application and future. Journal of Electronics & Information Technology, 2022, 44(5): 1865-1878.
|
| 49 |
VAN DE VEN G M, TUYTELAARS T, TOLIAS A S. Three types of incremental learning. Nature Machine Intelligence, 2022, 4(12): 1185-1197.
doi: 10.1038/s42256-022-00568-3
|
| 50 |
ZHAO W, CHELLAPPA R, PHILLIPS P J, et al. Face recognition: a literature survey. ACM Computing Surveys, 2003, 35(4): 399-458.
doi: 10.1145/954339.954342
|
| 51 |
XIN Y, LUO S Q, ZHOU H D, et al. Parameter-efficient fine-tuning for pre-trained vision models: a survey[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2402.02242.
|
| 52 |
XU L, XIE H, QIN S Z J, et al. Parameter-efficient fine-tuning methods for pretrained language models: a critical review and assessment[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2312.12148.
|
| 53 |
JIE S B, DENG Z H, CHEN S X, et al. Convolutional bypasses are better vision Transformer adapters[M]. [S.l.]: IOS Press, 2024.
|
| 54 |
KORNBLITH S, SHLENS J, LE Q V. Do better ImageNet models transfer better?[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2019: 2661-2671.
|
| 55 |
|
| 56 |
WANG Z F, ZHANG Z Z, LEE C Y, et al. Learning to prompt for continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2022: 139-149.
|
| 57 |
WANG Z F, ZHANG Z Z, EBRAHIMI S, et al. DualPrompt: complementary prompting for rehearsal-free continual learning[C]//Proceedings of ECCV'22. Berlin, Germany: Springer, 2022: 631-648.
|
| 58 |
WANG Y, HUANG Z, HONG X. S-Prompts learning with pre-trained Transformers: an Occam's razor for domain incremental learning[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2207.12819.
|
| 59 |
LI Z W, ZHAO L, ZHANG Z Z, et al. Steering prototypes with prompt-tuning for rehearsal-free continual learning[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Washington D.C., USA: IEEE Press, 2024: 2511-2521.
|
| 60 |
SMITH J S, KARLINSKY L, GUTTA V, et al. CODA-Prompt: continual decomposed attention-based prompting for rehearsal-free continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2023: 11909-11919.
|
| 61 |
JUNG D, HAN D, BANG J, et al. Generating instance-level prompts for rehearsal-free continual learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 11813-11823.
|
| 62 |
ROY A, MOULICK R, VERMA V K, et al. Convolutional prompting meets language models for continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23616-23626.
|
| 63 |
KURNIAWAN M R, SONG X, MA Z H, et al. Evolving parameterized prompt memory for continual learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2024: 13301-13309.
|
| 64 |
GAO Z X, CEN J, CHANG X B. Consistent prompting for rehearsal-free continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 28463-28473.
|
| 65 |
TANG Y M, PENG Y X, ZHENG W S. When prompt-based incremental learning does not meet strong pretraining[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 1706-1716.
|
| 66 |
WANG L, XIE J, ZHANG X, et al. Hierarchical decomposition of prompt-based continual learning: rethinking obscured sub-optimality[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2023: 69054-69076.
|
| 67 |
KHAN M G Z A, NAEEM M F, VAN GOOL L, et al. Introducing language guidance in prompt-based continual learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 11429-11439.
|
| 68 |
WANG R Q, DUAN X Y, KANG G L, et al. AttriCLIP: a non-incremental learner for incremental knowledge learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2023: 3654-3663.
|
| 69 |
ZHANG G W, WANG L Y, KANG G L, et al. SLCA: slow learner with classifier alignment for continual learning on a pre-trained model[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 19091-19101.
|
| 70 |
GAO Q K, ZHAO C, SUN Y F, et al. A unified continual learning framework with general parameter-efficient tuning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 11449-11459.
|
| 71 |
LIANG Y S, LI W J. InfLoRA: interference-free low-rank adaptation for continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23638-23647.
|
| 72 |
LIANG Y S, LI W J. Adaptive plasticity improvement for continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2023: 7816-7825.
|
| 73 |
BOWMAN B, ACHILLE A, ZANCATO L, et al. À-la-carte Prompt Tuning (APT): combining distinct data via composable prompting[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2023: 14984-14993.
|
| 74 |
TAN Y W, ZHOU Q H, XIANG X, et al. Semantically-shifted incremental adapter-tuning is a continual ViTransformer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23252-23262.
|
| 75 |
ZHOU D W, SUN H L, YE H J, et al. Expandable subspace ensemble for pre-trained model-based class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23554-23564.
|
| 76 |
YU J Z, ZHUGE Y Z, ZHANG L, et al. Boosting continual learning of vision-language models via mixture-of-experts adapters[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23219-23230.
|
| 77 |
WANG S P, LI X R, SUN J, et al. Training networks in null space of feature covariance for continual learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2021: 184-193.
|
| 78 |
LI D P, WANG T Q, CHEN J W, et al. Towards continual learning desiderata via HSIC-bottleneck orthogonalization and equiangular embedding[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2024: 13464-13473.
|
| 79 |
GRETTON A, BOUSQUET O, SMOLA A, et al. Measuring statistical dependence with Hilbert-Schmidt norms[C]//Proceedings of the International Conference on Algorithmic Learning Theory (ALT). Berlin, Germany: Springer, 2005.
|
| 80 |
廖丁丁, 刘俊峰, 曾君, 等. 一种基于块平均正交权重修正的连续学习算法. 计算机工程, 2025, 51 (6): 57- 64.
doi: 10.19678/j.issn.1000-3428.0069310
|
|
LIAO D D, LIU J F, ZENG J, et al. A continuous learning algorithm based on block-averaged orthogonal weight modification. Computer Engineering, 2025, 51(6): 57-64.
doi: 10.19678/j.issn.1000-3428.0069310
|
| 81 |
ZENG G X, CHEN Y, CUI B, et al. Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 2019, 1: 364-372.
doi: 10.1038/s42256-019-0080-x
|
| 82 |
WANG Y B, MA Z H, HUANG Z W, et al. Isolation and impartial aggregation: a paradigm of incremental learning without interference[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2023: 10209-10217.
|
| 83 |
SCELLIER B, ERNOULT M, KENDALL J, et al. Energy-based learning algorithms: a comparative study[C]//Proceedings of ICML Workshop on Localized Learning. Washington D.C., USA: IEEE Press, 2023: 1-10.
|
| 84 |
MCDONNELL M D, GONG D, PARVANEH A, et al. RanPAC: random projections and pre-trained models for continual learning[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2023: 12022-12053.
|
| 85 |
HUANG W C, CHEN C F, HSU H. OVOR: OnePrompt with virtual outlier regularization for rehearsal-free class-incremental learning[EB/OL]. [2024-10-08]. https://arxiv.org/abs/2402.04129.
|
| 86 |
|
| 87 |
HENDRYCKS D, BASART S, MU N, et al. The many faces of robustness: a critical analysis of out-of-distribution generalization[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 8320-8329.
|
| 88 |
PENG X C, BAI Q X, XIA X D, et al. Moment matching for multi-source domain adaptation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2019: 1406-1415.
|
| 89 |
DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2009: 248-255.
|
| 90 |
|
| 91 |
EBRAHIMI S, MEIER F, CALANDRA R, et al. Adversarial continual learning[C]//Proceedings of ECCV'20. Berlin, Germany: Springer International Publishing, 2020.
|
| 92 |
LI C Q, HUANG Z W, PAUDEL D P, et al. A continual deepfake detection benchmark: dataset, methods, and essentials[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Washington D.C., USA: IEEE Press, 2023: 1339-1349.
|
| 93 |
|
| 94 |
KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526.
doi: 10.1073/pnas.1611835114
|
| 95 |
LI Z Z, HOIEM D. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(12): 2935-2947.
doi: 10.1109/TPAMI.2017.2773081
|
| 96 |
DHAR S, GUO J Y, LIU J J, et al. A survey of on-device machine learning. ACM Transactions on Internet of Things, 2021, 2(3): 1-49.
|
| 97 |
JABEEN S, LI X, AMIN M S, et al. A review on methods and applications in multimodal deep learning. ACM Transactions on Multimedia Computing, Communications, and Applications, 2023, 19(2): 1-41.
|
| 98 |
ZHANG S Z, LUO W L, CHENG D, et al. Cross-platform video person ReID: a new benchmark dataset and adaptation approach[C]//Proceedings of ECCV'24. Berlin, Germany: Springer, 2025: 270-287.
|
| 99 |
|
| 100 |
SONG Y S, WANG T, CAI P Y, et al. A comprehensive survey of few-shot learning: evolution, applications, challenges, and opportunities. ACM Computing Surveys, 2023, 55(13): 1-40.
|
| 101 |
NODET P, LEMAIRE V, BONDU A, et al. From weakly supervised learning to biquality learning: an introduction[C]//Proceedings of the International Joint Conference on Neural Networks (IJCNN). Washington D.C., USA: IEEE Press, 2021: 1-10.
|