1 GUO H, LI X Y, TANG J Y, et al. Adaptive feature fusion for multi-modal entity alignment[J/OL]. Acta Automatica Sinica: 1-13 [2023-02-02]. DOI: 10.16383/j.aas.c210518. (in Chinese)
|
2 CHEN Y, ZHOU G, LU J C. Survey on construction and application research for multi-modal knowledge graphs[J]. Application Research of Computers, 2021, 38(12): 3535-3543. (in Chinese)
|
3 ZHANG T H, LI T T, ZHANG Y G. Multi-hop Chinese knowledge question answering method based on knowledge graph embedding[J]. Journal of Jilin University (Science Edition), 2022, 60(1): 119-126. (in Chinese)
|
4
|
5 SUCHANEK F M, KASNECI G, WEIKUM G. YAGO: a core of semantic knowledge[C]//Proceedings of the 16th International Conference on World Wide Web. New York, USA: ACM Press, 2007: 697-706.
|
6 LEHMANN J, ISELE R, JAKOB M, et al. DBpedia—a large-scale, multilingual knowledge base extracted from Wikipedia[J]. Semantic Web, 2015, 6(2): 167-195. DOI: 10.3233/SW-140134.
|
7 ZHANG Q, FU J L, LIU X Y, et al. Adaptive co-attention network for named entity recognition in tweets[C]//Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2018: 1-8.
|
8 RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[EB/OL]. [2023-02-02]. https://arxiv.org/abs/2103.00020.
|
9
|
10 LI G, DUAN N, FANG Y J, et al. Unicoder-VL: a universal encoder for vision and language by cross-modal pre-training[C]//Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2020: 11336-11344.
|
11
|
12 CHEN Y C, LI L J, YU L C, et al. UNITER: universal image-text representation learning[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 104-120.
|
13 LI X J, YIN X, LI C Y, et al. OSCAR: object-semantics aligned pre-training for vision-language tasks[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 121-137.
|
14 DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Philadelphia, USA: ACL Press, 2019: 4171-4186.
|
15
|
16
|
17 SARIYILDIZ M B, PEREZ J, LARLUS D. Learning visual representations with caption annotations[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020.
|
18
|
19 HUANG Z C, ZENG Z Y, HUANG Y P, et al. Seeing out of the box: end-to-end pre-training for vision-language representation learning[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2021: 12971-12980.
|
20 ZHUGE M C, GAO D H, FAN D P, et al. Kaleido-BERT: vision-language pre-training on fashion domain[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2021: 12642-12652.
|
21
|
22 GAO D H, JIN L B, CHEN B, et al. FashionBERT: text and image matching with adaptive loss for cross-modal retrieval[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, USA: ACM Press, 2020: 2251-2260.
|
23 XU K, BA J, KIROS R, et al. Show, attend and tell: neural image caption generation with visual attention[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015: 2048-2057.
|
24 SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the Inception architecture for computer vision[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2016: 2818-2826.
|
25 XIE S N, GIRSHICK R, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 5987-5995.
|
26
|
27
|
28
|
29
|
30 WU J, CHENG Y, HAO H, et al. Automatic extraction of Chinese terminology based on BERT embedding and BiLSTM-CRF model[J]. Journal of the China Society for Scientific and Technical Information, 2020, 39(4): 409-418. DOI: 10.3772/j.issn.1000-0135.2020.04.007. (in Chinese)
|