1 |
GATYS L A, ECKER A S, BETHGE M. Image style transfer using convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2016: 2414-2423.
|
2 |
|
3 |
|
4 |
LI Y, FANG C, YANG J, et al. Universal style transfer via feature transforms[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 385-395.
|
5 |
CAMPBELL N D F, KAUTZ J. Learning a manifold of fonts[J]. ACM Transactions on Graphics, 2014, 33(4): 1-11.
|
6 |
GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2014: 2672-2680.
|
7 |
YANG S, LIU J Y, WANG W J, et al. TET-GAN: text effects transfer via stylization and destylization[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 1238-1245.
doi: 10.1609/aaai.v33i01.33011238
|
8 |
LI C, WAND M. Precomputed real-time texture synthesis with Markovian generative adversarial networks[C]//Proceedings of ECCV 2016. Berlin, Germany: Springer, 2016: 702-716.
|
9 |
HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2017: 1501-1510.
|
10 |
LIU M Y, HUANG X, MALLYA A, et al. Few-shot unsupervised image-to-image translation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2019: 10551-10560.
|
11 |
KARRAS T, LAINE S, AILA T M. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2019: 4401-4410.
|
12 |
|
13 |
KALISCHEK N, WEGNER J D, SCHINDLER K. In the light of feature distributions: moment matching for neural style transfer[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2021: 9382-9391.
|
14 |
|
15 |
|
16 |
让孝迪. 基于生成对抗网络的无监督艺术图像风格迁移[D]. 烟台: 烟台大学, 2023.
|
|
RANG X D. Unsupervised art image style transfer based on generative adversarial networks[D]. Yantai: Yantai University, 2023. (in Chinese)
|
17 |
过劲. 基于生成对抗网络的艺术风格图像迁移研究[D]. 南昌: 南昌大学, 2023.
|
|
GUO J. Research on artistic style image transfer based on generative adversarial networks[D]. Nanchang: Nanchang University, 2023. (in Chinese)
|
18 |
TOGO R, KOTERA M, OGAWA T, et al. Text-guided style transfer-based image manipulation using multimodal generative models[J]. IEEE Access, 2021, 9: 64860-64870.
doi: 10.1109/ACCESS.2021.3069876
|
19 |
CHEN H B, ZHAO L, ZHANG H M, et al. Diverse image style transfer via invertible cross-space mapping[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2021: 14860-14869.
|
20 |
CAO J. Hierarchical-based calligraphy style transfer[J]. World Scientific Research Journal, 2021, 7(5): 430-439.
doi: 10.6911/WSRJ.202105_7(5).0048
|
21 |
LI W, HE Y X, QI Y W, et al. FET-GAN: font and effect transfer via K-shot adaptive instance normalization[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(2): 1717-1724.
doi: 10.1609/aaai.v34i02.5535
|
22 |
ZHANG H, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2019: 7354-7363.
|
23 |
ISOLA P, ZHU J Y, ZHOU T H, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 1125-1134.
|
24 |
|
25 |
MECHREZ R, TALMI I, ZELNIK-MANOR L. The contextual loss for image transformation with non-aligned data[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 768-783.
|
26 |
|
27 |
MESCHEDER L, GEIGER A, NOWOZIN S. Which training methods for GANs do actually converge?[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2018: 3481-3490.
|
28 |
ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D. C., USA: IEEE Press, 2017: 2223-2232.
|
29 |
CHOI Y, UH Y, YOO J, et al. StarGAN v2: diverse image synthesis for multiple domains[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2020: 8188-8197.
|
30 |
KIM J, KIM M, KANG H, et al. U-GAT-IT: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation[EB/OL]. [2023-05-20]. https://arxiv.org/abs/1907.10830.
|
31 |
HORE A, ZIOU D. Image quality metrics: PSNR vs. SSIM[C]//Proceedings of the 20th International Conference on Pattern Recognition. Washington D. C., USA: IEEE Press, 2010: 2366-2369.
|
32 |
PREUER K, RENZ P, UNTERTHINER T, et al. Fréchet ChemNet Distance: a metric for generative models for molecules in drug discovery[J]. Journal of Chemical Information and Modeling, 2018, 58(9): 1736-1741.
doi: 10.1021/acs.jcim.8b00234
|
33 |
钱旭淼, 段锦, 刘举, 等. 基于注意力特征融合的图像去雾算法[J]. 吉林大学学报(理学版), 2023, 61(3): 567-576.
doi: 10.13413/j.cnki.jdxblxb.2022252
|
|
QIAN X M, DUAN J, LIU J, et al. Image dehazing algorithm based on attention feature fusion[J]. Journal of Jilin University (Science Edition), 2023, 61(3): 567-576. (in Chinese)
doi: 10.13413/j.cnki.jdxblxb.2022252
|