[1] CHOPRA A, JAIN R, HEMANI M, et al. ZFlow:gated appearance flow-based virtual try-on with 3D priors[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2021:5433-5442.
[2] BHATNAGAR B, TIWARI G, THEOBALT C, et al. Multi-garment net:learning to dress 3D people from images[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2019:5419-5429.
[3] WANG T C, MALLYA A, LIU M Y. One-shot free-view neural talking-head synthesis for video conferencing[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2021:10039-10049.
[4] NARUNIEC J, HELMINGER L, SCHROERS C, et al. High-resolution neural face swapping for visual effects[J]. Computer Graphics Forum, 2020, 39(4):173-184.
[5] MA L Q, JIA X, SUN Q R, et al. Pose guided person image generation[EB/OL].[2023-06-05]. https://arxiv.org/abs/1705.09368.
[6] MA L Q, SUN Q R, GEORGOULIS S, et al. Disentangled person image generation[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2018:99-108.
[7] NI H M, LIU Y H, HUANG S X, et al. Cross-identity video motion retargeting with joint transformation and synthesis[C]//Proceedings of IEEE/CVF Winter Conference on Applications of Computer Vision. Washington D.C.,USA:IEEE Press,2023:412-422.
[8] ZHAO L, PENG X, TIAN Y, et al. Learning to forecast and refine residual motion for image-to-video generation[C]//Proceedings of European Conference on Computer Vision. Berlin,Germany:Springer,2018:403-419.
[9] REN Y R, LI G, CHEN Y Q, et al. PIRenderer:controllable portrait image generation via semantic neural rendering[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2021:13759-13768.
[10] LIU W, PIAO Z X, MIN J, et al. Liquid warping GAN:a unified framework for human motion imitation, appearance transfer and novel view synthesis[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2019:5903-5912.
[11] PUMAROLA A, AGUDO A, MARTINEZ A M, et al. GANimation:anatomically-aware facial animation from a single image[C]//Proceedings of the 15th European Conference on Computer Vision. Berlin,Germany:Springer,2018:835-851.
[12] EKMAN P, ROSENBERG E L. What the face reveals:basic and applied studies of spontaneous expression using the facial action coding system (FACS)[M]. 2nd ed. Oxford,UK:Oxford University Press,2005.
[13] SIAROHIN A, LATHUILIÈRE S, TULYAKOV S, et al. Animating arbitrary objects via deep motion transfer[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2019:2377-2386.
[14] TAO J L, WANG B, XU B R, et al. Structure-aware motion transfer with deformable anchor model[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2022:3627-3636.
[15] TAO J L, WANG B, GE T Z, et al. Motion transformer for unsupervised image animation[EB/OL].[2023-06-05]. https://arxiv.org/abs/2209.14024.
[16] SIAROHIN A, LATHUILIÈRE S, TULYAKOV S, et al. First order motion model for image animation[EB/OL].[2023-06-05]. https://arxiv.org/abs/2003.00196.
[17] SIAROHIN A, WOODFORD O J, REN J, et al. Motion representations for articulated animation[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2021:13648-13657.
[18] ZHAO J, ZHANG H. Thin-plate spline motion model for image animation[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2022:3657-3666.
[19] FARNEBÄCK G. Two-frame motion estimation based on polynomial expansion[EB/OL].[2023-06-05]. https://link.springer.com/chapter/10.1007/3-540-45103-X_50.
[20] BOOKSTEIN F L. Principal warps:thin-plate splines and the decomposition of deformations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(6):567-585.
[21] RONNEBERGER O, FISCHER P, BROX T. U-Net:convolutional networks for biomedical image segmentation[EB/OL].[2023-06-05]. https://arxiv.org/abs/1505.04597.
[22] BULAT A, TZIMIROPOULOS G. How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks)[C]//Proceedings of IEEE International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2017:1021-1030.
[23] LIANG J, ZENG H, ZHANG L. High-resolution photorealistic image translation in real-time:a Laplacian pyramid translation network[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2021:9392-9400.
[24] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2016:770-778.
[25] LAN C F, JIANG P W, CHEN H, et al. Cross-modal audio-visual speech separation based on a U-Net network combining an optical flow algorithm and an attention mechanism[J]. Journal of Electronics & Information Technology, 2023, 45(10):3538-3546.(in Chinese)
[26] HEITZ D, MÉMIN E, SCHNÖRR C. Variational fluid flow measurements from image sequences:synopsis and perspectives[J]. Experiments in Fluids, 2010, 48(3):369-393.
[27] WANG Q L, WU B G, ZHU P F, et al. ECA-Net:efficient channel attention for deep convolutional neural networks[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2020:11534-11542.
[28] XIONG Z M, ZENG Q, LU P, et al. Logical reasoning based on residual attention multi-scale relation network[J]. Computer Engineering, 2023, 49(6):227-233, 241.(in Chinese)
[29] HOU Q B, ZHOU D Q, FENG J S. Coordinate attention for efficient mobile network design[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2021:13713-13722.
[30] SUN W, CHANG P S, DAI L, et al. Vehicle type recognition based on attention-guided data augmentation[J]. Computer Engineering, 2022, 48(7):300-306.(in Chinese)
[31] JOHNSON J, ALAHI A, LI F F. Perceptual losses for real-time style transfer and super-resolution[C]//Proceedings of European Conference on Computer Vision. Berlin,Germany:Springer,2016:694-711.
[32] AIFANTI N, PAPACHRISTOU C, DELOPOULOS A. The MUG facial expression database[EB/OL].[2023-06-05]. https://mug.ee.auth.gr/fed/.
[33] DIBEKLIOĞLU H, ALI SALAH A, GEVERS T. Are you really smiling at me? Spontaneous versus posed enjoyment smiles[EB/OL].[2023-06-05]. https://link.springer.com/content/pdf/10.1007/978-3-642-33712-3_38.
[34] ZHAO G Y, HUANG X H, TAINI M, et al. Facial expression recognition from near-infrared videos[J]. Image and Vision Computing, 2011, 29(9):607-619.
[35] WANG W, ALAMEDA-PINEDA X, XU D, et al. Every smile is unique:landmark-guided diverse smile generation[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2018:7083-7092.
[36] AMOS B, LUDWICZUK B, SATYANARAYANAN M. OpenFace:a general-purpose face recognition library with mobile applications[EB/OL].[2023-06-05]. https://elijah.cs.cmu.edu/DOCS/CMU-CS-16-118.pdf.
[37] ALLEN D M. Mean square error of prediction as a criterion for selecting variables[J]. Technometrics, 1971, 13(3):469-475.