[1] LI J, SUN A X, HAN J L, et al.A survey on deep learning for named entity recognition[J].IEEE Transactions on Knowledge and Data Engineering, 2022, 34(1):50-70.
[2] MOON S, NEVES L, CARVALHO V.Multimodal named entity recognition for short social media posts[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Stroudsburg, USA:Association for Computational Linguistics, 2018:852-860.
[3] ZHANG Q, FU J, LIU X, et al.Adaptive co-attention network for named entity recognition in Tweets[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence.Palo Alto, USA:AAAI Press, 2018:5674-5681.
[4] LU D, NEVES L, CARVALHO V, et al.Visual attention model for name tagging in multimodal social media[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2018:1990-1999.
[5] HE K M, ZHANG X Y, REN S Q, et al.Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2016:770-778.
[6] YU J F, JIANG J, YANG L, et al.Improving multimodal named entity recognition via entity span detection with unified multimodal transformer[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2020:3342-3352.
[7] DEVLIN J, CHANG M W, LEE K, et al.BERT:pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Stroudsburg, USA:Association for Computational Linguistics, 2019:4171-4186.
[8] CIPOLLA R, GAL Y, KENDALL A.Multi-task learning using uncertainty to weigh losses for scene geometry and semantics[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:7482-7491.
[9] LI Y J, CARAGEA C.Multi-task stance detection with sentiment and stance lexicons[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Stroudsburg, USA:Association for Computational Linguistics, 2019:6298-6304.
[10] CLARK K, LUONG M T, KHANDELWAL U, et al.BAM! Born-again multi-task networks for natural language understanding[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2019:5931-5937.
[11] WANG X, LYU J G, DONG L, et al.Multitask learning for biomedical named entity recognition with cross-sharing structure[J].BMC Bioinformatics, 2019, 20(1):427.
[12] ZHANG Y, YANG Q.An overview of multi-task learning[J].National Science Review, 2018, 5(1):30-43.
[13] CHEN Z, BADRINARAYANAN V, LEE C Y, et al.GradNorm:gradient normalization for adaptive loss balancing in deep multitask networks[C]//Proceedings of the 35th International Conference on Machine Learning.Stockholm, Sweden:PMLR, 2018:794-803.
[14] YANG Y, HOSPEDALES T M.Deep multi-task representation learning:a tensor factorisation approach[C]//Proceedings of the 5th International Conference on Learning Representations.Toulon, France:[s.n.], 2017:1-14.
[15] REI M.Semi-supervised multitask learning for sequence labeling[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2017:2121-2130.
[16] LIN Y, YANG S Q, STOYANOV V, et al.A multi-lingual multi-task architecture for low-resource sequence labeling[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2018:799-809.
[17] VASWANI A, SHAZEER N, PARMAR N, et al.Attention is all you need[C]//Proceedings of NIPS'17.Cambridge, USA:MIT Press, 2017:5998-6008.
[18] CHEN T, KORNBLITH S, NOROUZI M, et al.A simple framework for contrastive learning of visual representations[C]//Proceedings of the 37th International Conference on Machine Learning.New York, USA:ACM Press, 2020:1597-1607.
[19] FANG H C, WANG S C, ZHOU M, et al.CERT:contrastive self-supervised learning for language understanding[EB/OL].[2022-01-12].https://arxiv.org/abs/2005.12766.
[20] GIORGI J, NITSKI O, WANG B, et al.DeCLUTR:deep contrastive learning for unsupervised textual representations[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.Stroudsburg, USA:Association for Computational Linguistics, 2021:879-895.
[21] WU Z F, WANG S N, GU J T, et al.CLEAR:contrastive learning for sentence representation[EB/OL].[2022-01-12].https://arxiv.org/abs/2012.15466.
[22] YOU Y N, CHEN T L, SHEN Y, et al.Graph contrastive learning automated[EB/OL].[2022-01-12].https://arxiv.org/abs/2106.07594.
[23] MA X Z, HOVY E.End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.Stroudsburg, USA:Association for Computational Linguistics, 2016:1064-1074.
[24] HUANG Z H, XU W, YU K.Bidirectional LSTM-CRF models for sequence tagging[EB/OL].[2022-01-12].https://arxiv.org/abs/1508.01991.
[25] LAMPLE G, BALLESTEROS M, SUBRAMANIAN S, et al.Neural architectures for named entity recognition[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Stroudsburg, USA:Association for Computational Linguistics, 2016:260-270.