[1]Liu P, Ji L, Ye F, et al. AdvMIL: Adversarial multiple instance learning for the survival analysis on whole-slide images[J]. Medical Image Analysis, 2024, 91: 103020.
[2]Cao Guangshuo, Huang Ruizhang, Chen Yanping, et al. Research on breast cancer survival prediction based on multimodal learning[J]. Computer Engineering, 2024, 50(01): 296-305.
[3]Zhang Xueqin, Li Yuexin, Liu Chang, et al. Cancer survival prediction using multimodal fusion of pathological images and genomics[J]. Journal of East China University of Science and Technology (Natural Science Edition), 2025, 51(04): 505-513.
[4]Song A H, Chen R J, Jaume G, et al. Multimodal prototyping for cancer survival prediction[J]. arXiv preprint arXiv:2407.00224, 2024.
[5]Chen R J, Lu M Y, Weng W H, et al. Multimodal co-attention transformer for survival prediction in gigapixel whole slide images[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 4015-4025.
[6]Liu M, Liu Y, Cui H, et al. MGCT: Mutual-guided cross-modality transformer for survival outcome prediction using integrative histopathology-genomic features[C]//2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2023: 1306-1312.
[7]Xu Y, Chen H. Multimodal optimal transport-based co-attention transformer with global structure consistency for survival prediction[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 21241-21251.
[8]Steyaert S, Pizurica M, Nagaraj D, et al. Multimodal data fusion for cancer biomarker discovery with deep learning[J]. Nature Machine Intelligence, 2023, 5(4): 351-362.
[9]Liberzon A, Birger C, Thorvaldsdóttir H, et al. The molecular signatures database hallmark gene set collection[J]. Cell Systems, 2015, 1(6): 417-425.
[10]Lu M Y, Williamson D F K, Chen T Y, et al. Data-efficient and weakly supervised computational pathology on whole-slide images[J]. Nature Biomedical Engineering, 2021, 5(6): 555-570.
[11]Long L, Cui J, Zeng P, et al. MuGI: Multi-Granularity Interactions of Heterogeneous Biomedical Data for Survival Prediction[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024: 490-500.
[12]Jaume G, Vaidya A, Chen R J, et al. Modeling dense multimodal interactions between biological pathways and histology for survival prediction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 11579-11590.
[13]Song C, Chen H, Zheng H, et al. MoME: Mixture of multimodal experts for cancer survival prediction[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024: 318-328.
[14]Luo H, Huang J, Ju H, et al. M2EF-NNs: Multimodal multi-instance evidence fusion neural networks for cancer survival prediction[J]. arXiv preprint arXiv:2408.04170, 2024.
[15]Zhou F, Chen H. Cross-modal translation and alignment for survival analysis[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 21485-21494.
[16]Wang Z, Zhang Y, Xu Y, et al. Histo-genomic knowledge distillation for cancer prognosis from histopathology whole slide images[J]. arXiv preprint arXiv:2403.10040, 2024.
[17]Hu Y, Li X, Yi Y, et al. Deep learning-driven survival prediction in pan-cancer studies by integrating multimodal histology-genomic data[J]. Briefings in Bioinformatics, 2025, 26(2): bbaf121.
[18]Jiang S, Gan Z, Cai L, et al. Multimodal Cross-Task Interaction for Survival Analysis in Whole Slide Pathological Images[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024: 329-339.
[19]Chen R J, Ding T, Lu M Y, et al. Towards a general-purpose foundation model for computational pathology[J]. Nature Medicine, 2024, 30(3): 850-862.
[20]Shao Z, Bian H, Chen Y, et al. TransMIL: Transformer based correlated multiple instance learning for whole slide image classification[J]. Advances in Neural Information Processing Systems, 2021, 34: 2136-2147.
[21]Wang H, Liu H, Du F, et al. MDPNet: a dual-path parallel fusion network for multi-modal MRI glioma genotyping[J]. Frontiers in Oncology, 2025, 15: 1574861.
[22]Goldman M J, Craft B, Hastie M, et al. Visualizing and interpreting cancer genomics data via the Xena platform[J]. Nature Biotechnology, 2020, 38(6): 675-678.
[23]Gao Y, Niu F, Qin H, et al. Enhancing glioma immunohistochemical image classification through color deconvolution-aware prior guidance[J]. PLoS One, 2025, 20(9): e0324359.
[24]Klambauer G, Unterthiner T, Mayr A, et al. Self-normalizing neural networks[J]. Advances in Neural Information Processing Systems, 2017, 30.
[25]Ilse M, Tomczak J, Welling M. Attention-based deep multiple instance learning[C]//International Conference on Machine Learning. PMLR, 2018: 2127-2136.
[26]Yao J, Zhu X, Jonnagaddala J, et al. Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks[J]. Medical Image Analysis, 2020, 65: 101789.
[27]Lundberg S M, Lee S I. A unified approach to interpreting model predictions[J]. Advances in Neural Information Processing Systems, 2017, 30.
[28]Eijpe A, Lakbir S, Cesur M E, et al. Disentangled and interpretable multimodal attention fusion for cancer survival prediction[J]. arXiv preprint arXiv:2503.16069, 2025.
[29]Matsuo K, Purushotham S, Jiang B, et al. Survival outcome prediction in cervical cancer: Cox models vs deep-learning model[J]. American Journal of Obstetrics and Gynecology, 2019, 220(4): 381.e1-381.e14.