[1] 屈路明. 武汉大学历史学院2008年度学术动态[J]. 历史教学问题, 2009(6): 104-106.
QU L. Academic activities of Wuhan University's History College in 2008[J]. Historical Research Issues, 2009(6): 104-106.
[2] LIU G, REDA F A, SHIH K J, et al. Image Inpainting for Irregular Holes Using Partial Convolutions[C/OL]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 85-100[2025-12-25]. https://openaccess.thecvf.com/content_ECCV_2018/html/Guilin_Liu_Image_Inpainting_for_ECCV_2018_paper.html.
[3] YAN Z, LI X, LI M, et al. Shift-Net: Image Inpainting via Deep Feature Rearrangement[C/OL]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 1-17[2025-12-25]. https://openaccess.thecvf.com/content_ECCV_2018/html/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.html.
[4] NAZERI K, NG E, JOSEPH T, et al. EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning[A/OL]. arXiv, 2019[2025-12-25]. http://arxiv.org/abs/1901.00212. DOI:10.48550/arXiv.1901.00212.
[5] RARES A, REINDERS M J T, BIEMOND J. Edge-based image restoration[J/OL]. IEEE Transactions on Image Processing, 2005, 14(10): 1454-1468. DOI:10.1109/TIP.2005.854466.
[6] GUO X, YANG H, HUANG D. Image Inpainting via Conditional Texture and Structure Dual Generation[C/OL]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 14134-14143[2025-12-25]. https://openaccess.thecvf.com/content/ICCV2021/html/Guo_Image_Inpainting_via_Conditional_Texture_and_Structure_Dual_Generation_ICCV_2021_paper.html.
[7] LI Z, ZHANG Y, DU Y, et al. STNet: Structure and texture-guided network for image inpainting[J/OL]. Pattern Recognition, 2024, 156: 110786. DOI:10.1016/j.patcog.2024.110786.
[8] CHEN W, YUE H, WANG J, et al. An improved edge detection algorithm for depth map inpainting[J/OL]. Optics and Lasers in Engineering, 2014, 55: 69-77. DOI:10.1016/j.optlaseng.2013.10.025.
[9] WEI Z, MIN W, WANG Q, et al. ECNFP: Edge-constrained network using a feature pyramid for image inpainting[J/OL]. Expert Systems with Applications, 2022, 207: 118070. DOI:10.1016/j.eswa.2022.118070.
[10] CAO C, FU Y. Learning a Sketch Tensor Space for Image Inpainting of Man-Made Scenes[C/OL]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 14509-14518[2025-12-25]. https://openaccess.thecvf.com/content/ICCV2021/html/Cao_Learning_a_Sketch_Tensor_Space_for_Image_Inpainting_of_Man-Made_ICCV_2021_paper.html.
[11] DONG Q, CAO C, FU Y. Incremental Transformer Structure Enhanced Image Inpainting With Masking Positional Encoding[C/OL]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11358-11368[2025-12-25]. https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Incremental_Transformer_Structure_Enhanced_Image_Inpainting_With_Masking_Positional_Encoding_CVPR_2022_paper.html.
[12] CAO C, DONG Q, FU Y. ZITS++: Image Inpainting by Improving the Incremental Transformer on Structural Priors[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(10): 12667-12684. DOI:10.1109/TPAMI.2023.3280222.
[13] REN Y, YU X, ZHANG R, et al. StructureFlow: Image Inpainting via Structure-Aware Appearance Flow[C/OL]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 181-190[2025-12-25]. https://openaccess.thecvf.com/content_ICCV_2019/html/Ren_StructureFlow_Image_Inpainting_via_Structure-Aware_Appearance_Flow_ICCV_2019_paper.html.
[14] LIU H, JIANG B, SONG Y, et al. Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations[A/OL]. arXiv, 2020[2025-12-25]. http://arxiv.org/abs/2007.06929. DOI:10.48550/arXiv.2007.06929.
[15] DENG Y, HUI S, ZHOU S, et al. Context Adaptive Network for Image Inpainting[J/OL]. IEEE Transactions on Image Processing, 2023, 32: 6332-6345. DOI:10.1109/TIP.2023.3298560.
[16] ZHU M, HE D, LI X, et al. Image Inpainting by End-to-End Cascaded Refinement With Mask Awareness[J/OL]. IEEE Transactions on Image Processing, 2021, 30: 4855-4866. DOI:10.1109/TIP.2021.3076310.
[17] ISOGAWA M, MIKAMI D, IWAI D, et al. Mask Optimization for Image Inpainting[J/OL]. IEEE Access, 2018, 6: 69728-69741. DOI:10.1109/ACCESS.2018.2877401.
[18] YU J, LIN Z, YANG J, et al. Free-Form Image Inpainting With Gated Convolution[C/OL]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 4471-4480[2025-12-25]. https://openaccess.thecvf.com/content_ICCV_2019/html/Yu_Free-Form_Image_Inpainting_With_Gated_Convolution_ICCV_2019_paper.html.
[19] CHEN S, ATAPOUR-ABARGHOUEI A, SHUM H P H. HINT: High-Quality INpainting Transformer With Mask-Aware Encoding and Enhanced Attention[J/OL]. IEEE Transactions on Multimedia, 2024, 26: 7649-7660. DOI:10.1109/TMM.2024.3369897.
[20] MIAO W, WANG L, LU H, et al. ITrans: generative image inpainting with transformers[J/OL]. Multimedia Systems, 2024, 30(1): 21. DOI:10.1007/s00530-023-01211-w.
[21] NADERI M, GIVKASHI M, KARIMI N, et al. SFI-Swin: Symmetric Face Inpainting with Swin Transformer by Distinctly Learning Face Components Distributions[A/OL]. arXiv, 2023[2025-12-25]. http://arxiv.org/abs/2301.03130. DOI:10.48550/arXiv.2301.03130.
[22] XING C, REN Z. Binary inscription character inpainting based on improved context encoders[J]. IEEE Access, 2023, 11: 55834-55843.
[23] ZHU S, FANG P, ZHU C, et al. Text image inpainting via global structure-guided diffusion models[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(7): 7775-7783.
[24] LIU Y, ZHANG E, LIN G, et al. A structural information-guided cross-modal method for damaged inscription inpainting via vision-language models[J]. npj Heritage Science, 2025, 13(1): 485.
[25] 陈善雄, 朱世宇, 熊海灵, 等. 一种双判别器GAN的古彝文字符修复方法[J/OL]. 自动化学报, 2022, 48(3): 853-864. DOI:10.16383/j.aas.c190752.
CHEN S X, ZHU S Y, XIONG H L, et al. A method of inpainting ancient Yi characters based on dual discriminator generative adversarial networks[J]. Acta Automatica Sinica, 2022, 48(3): 853-864. DOI:10.16383/j.aas.c190752.
[26] ZHANG W, SU B, FENG R, et al. EA-GAN: restoration of text in ancient Chinese books based on an example attention generative adversarial network[J/OL]. Heritage Science, 2023, 11(1): 42. DOI:10.1186/s40494-023-00882-y.
[27] 段荧, 龙华, 瞿于荃, 等. 基于部分卷积的文字图像不规则干扰修复算法研究[J]. 计算机工程与科学, 2021, 43(9): 1634-1644.
DUAN Y, LONG H, QU Y Q, et al. An irregular interference repair algorithm of text images based on partial convolution[J]. Computer Engineering & Science, 2021, 43(9): 1634-1644.
[28] 李超, 李思樵, 张靖熙, 等. 基于深度学习算法的碑文提取与修复系统[J]. 信息技术与信息化, 2024(10): 193-196.
LI C, LI S, ZHANG J, et al. Extraction and restoration system of epitaphs based on deep learning algorithms[J]. Information Technology and Informatization, 2024(10): 193-196.
[29] GULATI A, QIN J, CHIU C C, et al. Conformer: Convolution-augmented Transformer for Speech Recognition[A/OL]. arXiv, 2020[2025-12-25]. http://arxiv.org/abs/2005.08100. DOI:10.48550/arXiv.2005.08100.
[30] 张兰云. 简牍文字提取与识别研究[D/OL]. 西北师范大学, 2018[2025-12-26]. https://kns.cnki.net/KCMS/detail/detail.aspx?dbcode=CMFD&dbname=CMFD201802&filename=1017199984.nh.
ZHANG L Y. Research on the extraction and recognition of bamboo slip characters[D]. Northwest Normal University, 2018.
[31] PENG X, ZHAO H, WANG X, et al. C3N: content-constrained convolutional network for mural image completion[J/OL]. Neural Computing and Applications, 2023, 35(2): 1959-1970. DOI:10.1007/s00521-022-07806-0.
[32] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[33] LIU Z, LIN Y, CAO Y, et al. Swin Transformer: Hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10012-10022.
[34] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[35] ZHANG Y, SHI Y, ZHANG P, et al. MegaHan97K: A large-scale dataset for mega-category Chinese character recognition with over 97K categories[J]. Pattern Recognition, 2025: 111757.