[1] MELLOUK W, HANDOUZI W. Facial emotion recognition using deep learning: review and insights[J]. Procedia Computer Science, 2020, 175: 689-694.
[2] TANG X Y, PENG W Y, LIU S R, et al. Classroom teaching evaluation based on facial expression recognition[C]//Proceedings of the 9th International Conference on Educational and Information Technology. New York, USA: ACM Press, 2020: 62-67.
[3] AL-EIDAN R M, AL-KHALIFA H, AL-SALMAN A. Deep-learning-based models for pain recognition: a systematic review[J]. Applied Sciences, 2020, 10(17): 5984.
[4] HAPPY S L, ROUTRAY A. Robust facial expression classification using shape and appearance features[C]//Proceedings of the 8th International Conference on Advances in Pattern Recognition. Washington D.C., USA: IEEE Press, 2015: 1-5.
[5] 罗思诗, 李茂军, 陈满. 多尺度融合注意力机制的人脸表情识别网络[J]. 计算机工程与应用, 2023, 59(1): 199-206.
LUO S S, LI M J, CHEN M. Multi-scale integrated attention mechanism for facial expression recognition network[J]. Computer Engineering and Applications, 2023, 59(1): 199-206. (in Chinese)
[6] CAKIR D, YILMAZ G, ARICA N. Facial action unit detection with ViT and perceiver using landmark patches[C]//Proceedings of the 12th Annual Information Technology, Electronics and Mobile Communication Conference. Washington D.C., USA: IEEE Press, 2021: 281-285.
[7] CHEN H F, JIANG D M, ZHAO Y, et al. Region attentive action unit intensity estimation with uncertainty weighted multi-task learning[J]. IEEE Transactions on Affective Computing, 2023, 14(3): 2033-2047.
[8] ZHU X L, HE Z L, ZHAO L, et al. A cascade attention based facial expression recognition network by fusing multi-scale spatio-temporal features[J]. Sensors, 2022, 22(4): 1350.
[9] CHEN Y J, LIU S G. Deep partial occlusion facial expression recognition via improved CNN[M]. Berlin, Germany: Springer, 2020.
[10] SONG L X, GONG D H, LI Z F, et al. Occlusion robust face recognition based on mask learning with pairwise differential Siamese network[EB/OL]. [2023-04-11]. https://arxiv.org/pdf/1908.06290.pdf.
[11] HE K M, CHEN X L, XIE S N, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2022: 16000-16009.
[12] ZHAO K L, CHU W S, ZHANG H G. Deep region and multi-label learning for facial action unit detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 3391-3399.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 6000-6010.
[14] ZAVASCHI T H H, KOERICH A L, OLIVEIRA L E S. Facial expression recognition using ensemble of classifiers[C]//Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C., USA: IEEE Press, 2011: 1489-1492.
[15] CHEN S K, WANG J F, CHEN Y D, et al. Label distribution learning on auxiliary label space graphs for facial expression recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 13984-13993.
[16] ZHAO Z, LIU Q, ZHOU F. Robust lightweight facial expression recognition network with label distribution training[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2021: 3510-3519.
[17] WANG K, PENG X, YANG J, et al. Region attention networks for pose and occlusion robust facial expression recognition[J]. IEEE Transactions on Image Processing, 2020, 29: 4057-4069.
[18] LIANG X, XU L, LIU J, et al. Patch attention layer of embedding handcrafted features in CNN for facial expression recognition[J]. Sensors, 2021, 21(3): 833.
[19] YANG W, GAO H W, JIANG Y Q, et al. A cascaded feature pyramid network with non-backward propagation for facial expression recognition[J]. IEEE Sensors Journal, 2021, 21(10): 11382-11392.
[20] DING H, ZHOU P, CHELLAPPA R. Occlusion-adaptive deep network for robust facial expression recognition[C]//Proceedings of the 2020 IEEE International Joint Conference on Biometrics. New York, USA: ACM Press, 2020: 1-9.
[21] 王军, 赵凯, 程勇. 基于遮挡感知卷积神经网络的面部表情识别模型[J]. 计算机工程, 2021, 47(10): 242-251.
WANG J, ZHAO K, CHENG Y. Facial expression recognition model based on convolutional neural network with occlusion perception[J]. Computer Engineering, 2021, 47(10): 242-251. (in Chinese)
[22] KIM J, LEE D. Facial expression recognition robust to occlusion and to intra-similarity problem using relevant subsampling[J]. Sensors, 2023, 23(5): 2619.
[23] CHEN Y A, CHEN W C, WEI C P, et al. Occlusion-aware face inpainting via generative adversarial networks[C]//Proceedings of the IEEE International Conference on Image Processing. Washington D.C., USA: IEEE Press, 2017: 1202-1206.
[24] LU Y, WANG S G, ZHAO W T, et al. WGAN-based robust occluded facial expression recognition[J]. IEEE Access, 2019, 7: 93594-93610.
[25] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. [2023-04-11]. https://arxiv.org/abs/2010.11929.
[26] LI S, DENG W H, DU J P. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 2852-2861.
[27] WANG K, PENG X J, YANG J F, et al. Suppressing uncertainties for large-scale facial expression recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 6897-6906.
[28] MA F Y, SUN B, LI S T. Facial expression recognition with visual transformers and attentional selective fusion[J]. IEEE Transactions on Affective Computing, 2023, 14(2): 1236-1248.
[29] 冉瑞生, 翁稳稳, 王宁, 等. 基于人脸关键特征提取的表情识别[J]. 计算机工程, 2023, 49(2): 254-262.
RAN R S, WENG W W, WANG N, et al. Expression recognition based on the extraction of key facial features[J]. Computer Engineering, 2023, 49(2): 254-262. (in Chinese)
[30] FARD A P, MAHOOR M H. Ad-Corre: adaptive correlation-based loss for facial expression recognition in the wild[J]. IEEE Access, 2022, 10: 26756-26768.