[1] EKMAN P. Telling lies:clues to deceit in the marketplace, politics, and marriage[M]. New York,USA:[s.n.], 1992.
[2] EKMAN P, FRIESEN W V. Constants across cultures in the face and emotion[J]. Journal of Personality and Social Psychology, 1971, 17(2):124-129.
[3] FERRETTI V, PAPALEO F. Understanding others:emotion recognition in humans and other animals[J]. Genes, Brain and Behavior, 2019, 18(1):e12544.
[4] GOTTMAN J M, LEVENSON R W. A two-factor model for predicting when a couple will divorce:exploratory analyses using 14-year longitudinal data[J]. Family Process, 2002, 41(1):83-96.
[5] SALTER F, GRAMMER K, RIKOWSKI A. Sex differences in negotiating with powerful males:an ethological analysis of approaches to nightclub doormen[J]. Human Nature, 2005, 16(3):306-321.
[6] 陈庄,赵源,罗颂,等.双通道动静态特征的微表情识别[J].小型微型计算机系统, 2023, 44(7):1500-1507. CHEN Z, ZHAO Y, LUO S, et al. Micro-expression recognition based on dynamic and static features of two channels[J]. Journal of Chinese Computer Systems, 2023, 44(7):1500-1507. (in Chinese)
[7] PORTER S, TEN BRINKE L. Reading between the lies:identifying concealed and falsified emotions in universal facial expressions[J]. Psychological Science, 2008, 19(5):508-514.
[8] LU Y, WANG S G, ZHAO W T, et al. WGAN-based robust occluded facial expression recognition[J]. IEEE Access, 2019, 7:93594-93610.
[9] CORNEJO J Y R, PEDRINI H. Emotion recognition based on occluded facial expressions[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/978-3-319-68560-1_28.
[10] POUX D, ALLAERT B, IHADDADENE N, et al. Dynamic facial expression recognition under partial occlusion with optical flow reconstruction[J]. IEEE Transactions on Image Processing, 2022, 31:446-457.
[11] MATHIS A, NOTHWANG W, DONAVANIK D, et al. Making optic flow robust to dynamic lighting conditions for real-time operation[EB/OL]. [2023-09-05]. https://apps.dtic.mil/sti/tr/pdf/AD1005369.pdf.
[12] SAXENA D, CAO J N. Generative Adversarial Networks (GANs):challenges, solutions, and future directions[J]. ACM Computing Surveys, 2021, 54(3):1-42.
[13] LIU S S, ZHANG Y, LIU K P, et al. Facial expression recognition under partial occlusion based on Gabor multi-orientation features fusion and local Gabor binary pattern histogram sequence[C]//Proceedings of the 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Washington D.C.,USA:IEEE Press,2013:218-222.
[14] ZHANG L G, TJONDRONEGORO D, CHANDRAN V. Random Gabor based templates for facial expression recognition in images with facial occlusion[J]. Neurocomputing, 2014, 145:451-464.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[EB/OL]. [2023-09-05]. https://arxiv.org/abs/1706.03762.
[16] LI Y, ZENG J B, SHAN S G, et al. Patch-gated CNN for occlusion-aware facial expression recognition[C]//Proceedings of the 24th International Conference on Pattern Recognition. Washington D.C.,USA:IEEE Press,2018:2209-2214.
[17] DING H, ZHOU P, CHELLAPPA R, et al. Occlusion-adaptive deep network for robust facial expression recognition[C]//Proceedings of the 2020 IEEE International Joint Conference on Biometrics. New York,USA:ACM Press,2020:1-9.
[18] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words:transformers for image recognition at scale[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2010.11929.
[19] FARZANEH A H, QI X J. Facial expression recognition in the wild via deep attentive center loss[C]//Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Washington D.C.,USA:IEEE Press,2021:2402-2411.
[20] 李晶,李健,陈海丰,等.基于关键区域遮挡与重建的人脸表情识别[J].计算机工程, 2024, 50(5):241-249. LI J, LI J, CHEN H F, et al. Facial expression recognition based on key region masking and reconstruction[J]. Computer Engineering, 2024, 50(5):241-249. (in Chinese)
[21] MA F Y, SUN B, LI S T. Facial expression recognition with visual transformers and attentional selective fusion[J]. IEEE Transactions on Affective Computing, 2023, 14(2):1236-1248.
[22] GAO J, ZHAO Y. TFE:a transformer architecture for occlusion aware facial expression recognition[J]. Frontiers in Neurorobotics, 2021, 15:763100.
[23] LIU C, HIROTA K, DAI Y P. Patch attention convolutional vision transformer for facial expression recognition with occlusion[J]. Information Sciences, 2023, 619:781-794.
[24] LI H T, SUI M Z, ZHAO F, et al. MVT:mask vision transformer for facial expression recognition in the wild[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2106.04520v2.
[25] ZHENG C, MENDIETA M, CHEN C. POSTER:a pyramid cross-fusion transformer network for facial expression recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Washington D.C.,USA:IEEE Press,2023:3146-3155.
[26] ZHANG L F, HONG X P, ARANDJELOVIC O, et al. Short and long range relation based spatio-temporal transformer for micro-expression recognition[J]. IEEE Transactions on Affective Computing, 2022, 13(4):1973-1985.
[27] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2018:7132-7141.
[28] WOO S, PARK J, LEE J Y, et al. CBAM:convolutional block attention module[EB/OL]. [2023-09-05]. https://arxiv.org/abs/1807.06521.
[29] ZHAO Z Q, LIU Q S, WANG S M. Learning deep global multi-scale and local attention features for facial expression recognition in the wild[J]. IEEE Transactions on Image Processing, 2021, 30:6544-6556.
[30] GERA D, BALASUBRAMANIAN S. Landmark guidance independent spatio-channel attention and complementary context information based facial expression recognition[J]. Pattern Recognition Letters, 2021, 145:58-66.
[31] WEN Z, LIN W, WANG T, et al. Distract your attention:multi-head cross attention network for facial expression recognition[J]. Biomimetics, 2023, 8(2):199.
[32] TOWNER H, SLATER M. Reconstruction and recognition of occluded facial expressions using PCA[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/978-3-540-74889-2_4.
[33] LIN J C, WU C H, WEI W L. Facial action unit prediction under partial occlusion based on error weighted cross-correlation model[C]//Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C.,USA:IEEE Press,2013:3482-3486.
[34] CORNEJO J Y R, PEDRINI H. Emotion recognition based on occluded facial expressions[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/978-3-319-68560-1_28.
[35] MAO X, XUE Y L, LI Z, et al. Robust facial expression recognition based on RPCA and AdaBoost[C]//Proceedings of the 10th Workshop on Image Analysis for Multimedia Interactive Services. Washington D.C.,USA:IEEE Press,2009:113-116.
[36] JIANG B, JIA K B. Research of robust facial expression recognition under facial occlusion condition[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/978-3-642-23620-4_13.
[37] JIANG M Y, WANG Y W, MCKEOWN M J, et al. Occlusion-robust FAU recognition by mining latent space of masked autoencoders[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2212.04029v1.
[38] 杨鲁月,张树美,赵俊莉.基于并行GAN的有遮挡动态表情识别[J].计算机工程与应用, 2021, 57(24):168-178. YANG L Y, ZHANG S M, ZHAO J L. Dynamic expression recognition with partial occlusion based on parallel GAN[J]. Computer Engineering and Applications, 2021, 57(24):168-178. (in Chinese)
[39] MA B W, AN R D, ZHANG W, et al. Facial action unit detection and intensity estimation from self-supervised representation[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2210.15878v1.
[40] 王海涌,梁红珠.基于改进的GAN的局部遮挡人脸表情识别[J].计算机工程与应用, 2020, 56(5):141-146. WANG H Y, LIANG H Z. Recognition of local occluded facial expressions based on improved generative adversarial network[J]. Computer Engineering and Applications, 2020, 56(5):141-146. (in Chinese)
[41] YAN W J, WU Q, LIU Y J, et al. CASME database:a dataset of spontaneous micro-expressions collected from neutralized faces[C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Washington D.C.,USA:IEEE Press,2013:1-7.
[42] QU F B, WANG S J, YAN W J, et al. CAS(ME)2:a database for spontaneous macro-expression and micro-expression spotting and recognition[J]. IEEE Transactions on Affective Computing, 2017, 9(4):424-436.
[43] LI X B, PFISTER T, HUANG X H, et al. A spontaneous micro-expression database:inducement, collection and baseline[C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Washington D.C.,USA:IEEE Press,2013:1-6.
[44] FARNEBÄCK G. Two-frame motion estimation based on polynomial expansion[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/3-540-45103-X_50.
[45] ZACH C, POCK T, BISCHOF H. A duality based approach for realtime TV-L1 optical flow[EB/OL]. [2023-09-05]. https://link.springer.com/chapter/10.1007/978-3-540-74936-3_22.
[46] BILEN H, FERNANDO B, GAVVES E, et al. Dynamic image networks for action recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2016:3034-3042.
[47] GARBACEA C, VAN DEN OORD A, LI Y Z, et al. Low bit-rate speech coding with VQ-VAE and a WaveNet decoder[C]//Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C.,USA:IEEE Press,2019:735-739.
[48] CHEN Y J, CHENG S I, CHIU W C, et al. Vector quantized image-to-image translation[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2207.13286.
[49] VAN DEN OORD A, VINYALS O, KAVUKCUOGLU K, et al. Neural discrete representation learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York,USA:ACM Press,2017:6309-6318.
[50] VAN DEN OORD A, KALCHBRENNER N, KAVUKCUOGLU K, et al. Pixel recurrent neural networks[C]//Proceedings of the 33rd International Conference on Machine Learning. New York,USA:ACM Press,2016:1747-1756.
[51] RAZAVI A, VAN DEN OORD A, VINYALS O. Generating diverse high-fidelity images with VQ-VAE-2[EB/OL]. [2023-09-05]. https://arxiv.org/abs/1906.00446.
[52] SHANNON C. Coding theorems for a discrete source with a fidelity criterion[EB/OL]. [2023-09-05]. https://gwern.net/doc/cs/algorithm/information/1959-shannon.pdf.
[53] BACHLECHNER T, MAJUMDER B P, MAO H H, et al. ReZero is all you need:fast convergence at large depth[EB/OL]. [2023-09-05]. https://arxiv.org/abs/2003.04887v2.
[54] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2016:770-778.
[55] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2023-09-05]. https://arxiv.org/abs/1409.1556v6.
[56] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2016:2818-2826.
[57] ZHAO G, PIETIKÄINEN M. Dynamic texture recognition using local binary patterns with an application to facial expressions[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6):915-928.
[58] CHAUDHRY R, RAVICHANDRAN A, HAGER G, et al. Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2009:1932-1939.
[59] LIONG S T, SEE J, WONG K, et al. Less is more:micro-expression recognition from video using Apex frame[J]. Signal Processing:Image Communication, 2018, 62:82-92.
[60] LIU Y J, ZHANG J K, YAN W J, et al. A main directional mean optical flow feature for spontaneous micro-expression recognition[J]. IEEE Transactions on Affective Computing, 2016, 7(4):299-310.