[1] GATYS L A, ECKER A S, BETHGE M. Image style transfer using convolutional neural networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2414-2423.
[2] CHEN J, DONG X L, LIANG J X, et al. Research on the local style transfer of clothing images by CycleGAN based on attention mechanism[J]. Computer Engineering, 2021, 47(11): 305-312. (in Chinese)
[3] ZHANG R, ISOLA P, EFROS A A. Colorful image colorization[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2016: 649-666.
[4] LIU H, PU Y Y, LV D H, et al. Polarized self-attention constrains color overflow in automatic coloring of image[J]. Computer Science, 2023, 50(3): 208-215. (in Chinese)
[5] DONG C, LOY C C, HE K M, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.
[6] CHEN Q S, PU L, ZHANG Y, et al. Image super-resolution reconstruction combining holistic attention and fractal density feature[J]. Computer Engineering, 2022, 48(11): 207-214, 223. (in Chinese)
[7] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[EB/OL]. [2023-03-05]. http://www.arxiv.org/pdf/1406.2661v1.pdf.
[8] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 2223-2232.
[9] CHOI Y, UH Y, YOO J, et al. StarGAN v2: diverse image synthesis for multiple domains[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 8188-8197.
[10] BAEK K, CHOI Y, UH Y, et al. Rethinking the truly unsupervised image-to-image translation[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2021: 14154-14163.
[11] LEE H, SEOL J, LEE S G. Contrastive learning for unsupervised image-to-image translation[EB/OL]. [2023-03-01]. https://arxiv.org/abs/2105.03117.
[12] RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]//Proceedings of 2021 International Conference on Machine Learning. New York, USA: ACM Press, 2021: 8748-8763.
[13] LIU M Y, BREUEL T, KAUTZ J. Unsupervised image-to-image translation networks[EB/OL]. [2023-03-01]. http://arxiv.org/abs/1703.00848.
[14] PARK T, EFROS A A, ZHANG R, et al. Contrastive learning for unpaired image-to-image translation[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 319-345.
[15] WANG W L, ZHOU W G, BAO J M, et al. Instance-wise hard negative example generation for contrastive learning in unpaired image-to-image translation[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2021: 14020-14029.
[16] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 8789-8797.
[17] PATASHNIK O, WU Z, SHECHTMAN E, et al. StyleCLIP: text-driven manipulation of StyleGAN imagery[C]//Proceedings of 2021 IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2021: 2085-2094.
[18] KARRAS T, LAINE S, AILA T M. A style-based generator architecture for generative adversarial networks[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 4401-4410.
[19] HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 1501-1510.
[20] PARK T, LIU M Y, WANG T C, et al. Semantic image synthesis with spatially-adaptive normalization[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 2337-2346.
[21] LUO X, HAN Z, YANG L, et al. Consistent style transfer[EB/OL]. [2023-03-01]. https://arxiv.org/abs/2201.02233.
[22] KARRAS T, AILA T, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[C]//Proceedings of 2018 International Conference on Learning Representations. Washington D.C., USA: IEEE Press, 2018: 123-156.
[23] KIM J, KIM M, KANG H, et al. U-GAT-IT: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation[C]//Proceedings of 2019 International Conference on Learning Representations. Washington D.C., USA: IEEE Press, 2019: 23-29.
[24] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[EB/OL]. [2023-03-01]. https://arxiv.org/abs/1706.08500.
[25] BIŃKOWSKI M, SUTHERLAND D J, ARBEL M, et al. Demystifying MMD GANs[C]//Proceedings of 2018 International Conference on Learning Representations. Washington D.C., USA: IEEE Press, 2018: 145-165.
[26] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2009: 248-255.
[27] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[EB/OL]. [2023-03-01]. https://arxiv.org/pdf/1704.00028.pdf.
[28] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the Inception architecture for computer vision[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2818-2826.
[29] KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. [2023-03-01]. https://arxiv.org/abs/1412.6980v6.
[30] KIM K, PARK S, JEON E, et al. A style-aware discriminator for controllable image translation[C]//Proceedings of 2022 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2022: 18239-18248.