[1] 张东阳, 陆子轩, 刘军民, 等. 深度模型的持续学习综述: 理论、方法和应用[J]. 电子与信息学报, 2024, 46(10): 3849-3878
Zhang Dongyang, Lu Zixuan, Liu Junmin, et al. A survey on continual learning of deep models: theories, methods, and applications[J]. Journal of Electronics & Information Technology, 2024, 46(10): 3849-3878
[2] Chen Zhiyuan, Liu Bing. Lifelong machine learning[J].
Synthesis Lectures on Artificial Intelligence and Machine
Learning, 2016, 10(3): 1-145
[3] Kudithipudi D, Aguilar-Simon M, Babb J, et al. Biological
underpinnings for lifelong learning machines[J]. Nature
Machine Intelligence, 2022, 4(3): 196-210
[4] 韩亚楠, 刘建伟, 罗雄麟. 连续学习研究进展[J]. 计算机研究与发展, 2022, 59(6): 1213-1239
Han Yanan, Liu Jianwei, Luo Xionglin. Research progress in continuous learning[J]. Journal of Computer Research and Development, 2022, 59(6): 1213-1239
[5] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J/OL]. arXiv preprint arXiv:1503.02531, 2015
[6] Li Zhizhong, Hoiem D. Learning without forgetting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(12): 2935-2947
[7] Castro F M, Marín-Jiménez M J, Guil N, et al. End-to-end
incremental learning[C]//Proc of the European Conf on
Computer Vision (ECCV). Berlin: Springer, 2018: 233-248
[8] Rebuffi S A, Kolesnikov A, Sperl G, et al. iCaRL: incremental classifier and representation learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 2001-2010
[9] Masana M, Liu Xialei, Twardowski B, et al. Class-incremental learning: survey and performance evaluation on image classification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(5): 5513-5533
[10] Lee K, Lee K, Shin J, et al. Overcoming catastrophic forgetting with unlabeled data in the wild[C]//Proc of the IEEE/CVF Int Conf on Computer Vision (ICCV). Piscataway, NJ: IEEE, 2019: 312-321
[11] Zhang Junting, Zhang Jie, Ghosh S, et al. Class-incremental learning via deep model consolidation[C]//Proc of the IEEE/CVF Winter Conf on Applications of Computer Vision. Piscataway, NJ: IEEE, 2020: 1131-1140
[12] Hou Saihui, Pan Xinyu, Loy C C, et al. Learning a unified
classifier incrementally via rebalancing[C]//Proc of the
IEEE/CVF Conf on Computer Vision and Pattern
Recognition. Piscataway, NJ: IEEE, 2019: 831-839
[13] Kang M, Park J, Han B. Class-incremental learning by knowledge distillation with adaptive feature consolidation[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 16071-16080
[14] Liu Yu, Hong Xiaopeng, Tao Xiaoyu, et al. Model behavior preserving for class-incremental learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022
[15] Peng Can, et al. Few-shot class-incremental learning from an open-set perspective[C]//Proc of the European Conf on Computer Vision (ECCV). Berlin: Springer, 2022
[16] Xue Hui, et al. Towards few-shot learning in the open world: a review and beyond[J/OL]. arXiv preprint arXiv:2408.09722, 2024
[17] Zhao Linglan, et al. Few-shot class-incremental learning via class-aware bilateral distillation[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 11838-11847
[18] Lin Jinhao, et al. M2SD: multiple mixing self-distillation for few-shot class-incremental learning[C]//Proc of the AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2024
[19] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526
[20] Wu Y, Huang L, Wang R, et al. Meta continual learning revisited: implicitly enhancing online Hessian approximation via variance reduction[C]//Proc of the Int Conf on Learning Representations, 2024
[21] Lin Huiwei, et al. PCR: proxy-based contrastive replay for online class-incremental continual learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 24246-24255
[22] Ostapenko O, Puscas M, Klein T, et al. Learning to remember: a synaptic plasticity driven framework for continual learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 11321-11329
[23] Wang Liyuan, Yang Kuo, Li Chongxuan, et al. ORDisCo: effective and efficient usage of incremental unlabeled data for semi-supervised continual learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 5383-5392
[24] Zhu Fei, Zhang Xuyao, Wang Chuang, et al. Prototype augmentation and self-supervision for incremental learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 5871-5880
[25] 朱飞, 张煦尧, 刘成林. 类别增量学习研究进展和性能评价[J]. 自动化学报, 2023, 49(3): 1-26
Zhu Fei, Zhang Xuyao, Liu Chenglin. Class incremental learning: a review and performance evaluation[J]. Acta Automatica Sinica, 2023, 49(3): 1-26
[26] Li Xilai, Zhou Yingbo, Wu Tianfu, et al. Learn to grow: a continual structure learning framework for overcoming catastrophic forgetting[C]//Proc of the Int Conf on Machine Learning. New York: ACM, 2019: 3925-3934
[27] Yan Shipeng, Xie Jiangwei, He Xuming. DER: dynamically expandable representation for class incremental learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 3014-3023
[28] Zhou Dawei, Wang Qiwei, Ye Hanjia, et al. A model or 603 exemplars: towards memory-efficient class-incremental learning[C/OL]//Proc of the Int Conf on Learning Representations, 2023 [2023-03-19]. https://arxiv.org/pdf/2205.13218.pdf
[29] Zhao Hanbin, Fu Yongjian, Kang Mintong, et al. MgSvF: multi-grained slow vs. fast framework for few-shot class-incremental learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
[30] Douillard A, Ramé A, Couairon G, et al. DyTox: transformers for continual learning with dynamic token expansion[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 9285-9295
[31] Wang Zifeng, Zhang Zizhao, Lee Chenyu, et al. Learning to prompt for continual learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 139-149
[32] Mittal S, Galesso S, Brox T. Essentials for class incremental learning[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 3513-3522
[33] Cubuk E D, Zoph B, Mane D, et al. AutoAugment: learning augmentation strategies from data[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 113-123
[34] Furlanello T, Lipton Z, Tschannen M, et al. Born again neural networks[C]//Proc of the Int Conf on Machine Learning. New York: ACM, 2018: 1607-1616
[35] Shi Yujun, Zhou Kuangqi, Liang Jian, et al. Mimicking the
oracle: an initial phase decorrelation approach for class
incremental learning[C]//Proc of the IEEE/CVF Conf on
Computer Vision and Pattern Recognition. Piscataway, NJ:
IEEE, 2022: 16722-16731
[36] Kurmi V K, Patro B N, Subramanian V K, et al. Do not forget to attend to uncertainty while mitigating catastrophic forgetting[C]//Proc of the IEEE/CVF Winter Conf on Applications of Computer Vision. Piscataway, NJ: IEEE, 2021: 736-745
[37] Krishnan R, Tickoo O. Improving model calibration with accuracy versus uncertainty optimization[J]. Advances in Neural Information Processing Systems, 2020, 33: 18237-18248
[38] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images[R]. Toronto: University of Toronto, 2009
[39] Netzer Y, Wang Tao, Coates A, et al. Reading digits in natural images with unsupervised feature learning[C/OL]//Proc of NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Cambridge, MA: MIT Press, 2011 [2020-12-11]. http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf
[40] He K, Zhang X, Ren S, et al. Deep residual learning
for image recognition[C]//Proc of the IEEE Conf on
Computer Vision and Pattern Recognition. Piscataway,
NJ: IEEE, 2016: 770-778
[41] Paszke A, Gross S, Massa F, et al. PyTorch: an imperative style, high-performance deep learning library[C/OL]//Conf on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2019 [2023-03-19]. https://papers.nips.cc/paper_files/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf
[42] Guo C, Pleiss G, Sun Y, et al. On calibration of modern neural networks[C]//Proc of the Int Conf on Machine Learning. New York: ACM, 2017: 1321-1330