1.
2.
3.
4. ZHAO X, LI S, LI Z S. Abdominal multi-organ image segmentation based on parallel coding of CNN and Transformer[J]. Journal of Jilin University (Science Edition), 2024, 62(5): 1145-1154.
5. WANG H Y, GUO S Z, YE J, et al. SAM-Med3D: towards general-purpose segmentation models for volumetric medical images[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2310.15161v3.
6. PANDEY S, CHEN K F, DAM E B. Comprehensive multimodal segmentation in medical imaging: combining YOLOv8 with SAM and HQ-SAM models[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Washington D.C., USA: IEEE Press, 2023: 2584-2590.
7. PARULEKAR B, SINGH N, RAMIYA A M. Evaluation of Segment Anything Model (SAM) for automated labelling in machine learning classification of UAV geospatial data[J]. Earth Science Informatics, 2024, 17(5): 4407-4418. doi: 10.1007/s12145-024-01402-7
8. HETANG C R, XUE H R, LE C, et al. Segment anything model for road network graph extraction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Washington D.C., USA: IEEE Press, 2024: 2556-2566.
9. ZHAO X Q, WU Z, CHEN Y B, et al. Fine-grained high-resolution remote sensing image change detection by SAM-U-Net change detection model[J]. Remote Sensing, 2024, 16(19): 3620. doi: 10.3390/rs16193620
10. ZHANG J J, BAI C J, HE H R, et al. SAM-E: leveraging visual foundation model with sequence imitation for embodied manipulation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2405.19586v1.
11.
12. AHMADI M, LONBAR A G, NAEINI H K, et al. Application of segment anything model for civil infrastructure defect assessment[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2304.12600v2.
13. KIRILLOV A, MINTUN E, RAVI N, et al. Segment anything[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2023: 3992-4003.
14. ZHANG Y C, SHEN Z R, JIAO R S. Segment anything model for medical image segmentation: current applications and future directions[J]. Computers in Biology and Medicine, 2024, 171: 108238. doi: 10.1016/j.compbiomed.2024.108238
15. WANG M, HUANG Z Z, HE H G, et al. Potential and prospects of segment anything model: a survey[J]. Journal of Image and Graphics, 2024, 29(6): 1479-1509.
16. SUN X, CAI X H, LI M, et al. Review of application of visual foundation model SAM in medical image segmentation[J]. Computer Engineering and Applications, 2024, 60(17): 1-16.
17. ALI M, WU T, HU H J, et al. A review of the Segment Anything Model (SAM) for medical image analysis: accomplishments and perspectives[J]. Computerized Medical Imaging and Graphics, 2025, 119: 102473. doi: 10.1016/j.compmedimag.2024.102473
18.
19. RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[M]. Berlin, Germany: Springer International Publishing, 2015.
20.
21. HE K M, CHEN X L, XIE S N, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2022: 15979-15988.
22.
23.
24.
25. XIONG Y Y, VARADARAJAN B, WU L M, et al. EfficientSAM: leveraged masked image pretraining for efficient segment anything[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 16111-16121.
26. ZHANG Z Y, CAI H, HAN S. EfficientViT-SAM: accelerated segment anything model without performance loss[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Washington D.C., USA: IEEE Press, 2024: 7859-7863.
27.
28.
29. SONG Y, ZHOU Q, LI X, et al. BA-SAM: scalable bias-mode attention mask for segment anything model[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 3162-3173.
30.
31. FENG Z S, ZHANG Y L, CHEN Y H, et al. SwinSAM: fine-grained polyp segmentation in colonoscopy images via segment anything model integrated with a Swin Transformer decoder[J]. Biomedical Signal Processing and Control, 2025, 100: 107055. doi: 10.1016/j.bspc.2024.107055
32.
33. ZHANG L, LIANG Y, ZHANG R, et al. BLO-SAM: bi-level optimization based finetuning of the segment anything model for overfitting-preventing semantic segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2402.16338.
34. JIANG M Z, ZHOU J Y, WU J D, et al. Uncertainty-Aware Adapter: adapting Segment Anything Model (SAM) for ambiguous medical image segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2403.10931v2.
35. WU J D, JI W, LIU Y P, et al. Medical SAM adapter: adapting segment anything model for medical image segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2304.12620v7.
36. MA J, HE Y, LI F, et al. Segment anything in medical images[J]. Nature Communications, 2024, 15(1): 654. doi: 10.1038/s41467-024-44824-z
37.
38. GAO Y F, XIA W, HU D D, et al. DeSAM: decoupled segment anything model for generalizable medical image segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2306.00499v2.
39. CHEN T R, ZHU L Y, DING C T, et al. SAM-Adapter: adapting segment anything in underperformed scenes[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Washington D.C., USA: IEEE Press, 2023: 3359-3367.
40.
41. LI B, XIAO H K, TANG L. ASAM: boosting segment anything model with adversarial tuning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 3699-3710.
42. HU S L, CHEN B, ZHANG K H, et al. Co-saliency object detection enhanced by scene structure knowledge[J]. Computer Engineering, 2025, 51(1): 31-41. doi: 10.19678/j.issn.1000-3428.0070064
43. CHEN K Y, LIU C Y, CHEN H, et al. RSPrompter: learning to prompt for remote sensing instance segmentation based on visual foundation model[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 4701117.
44. YUE W X, ZHANG J, HU K, et al. SurgicalSAM: efficient class promptable surgical instrument segmentation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2024: 6890-6898.
45. SUN Y P, CHEN J H, ZHANG S, et al. VRP-SAM: SAM with visual reference prompt[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23565-23574.
46.
47. ZHANG Y X, CHENG T H, ZHU L H, et al. EVF-SAM: early vision-language fusion for text-prompted segment anything model[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2406.20076v5.
48.
49.
50.
51.
52. XU Y S, TANG J Q, MEN A D, et al. EviPrompt: a training-free evidential prompt generation method for segment anything model in medical images[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2311.06400v1.
53.
54. BAI Y, WANG J, RAN H L, et al. Research on internal defect annotation and detection methods of semiconductor devices[J]. Computer Engineering, 2024, 50(12): 245-253. doi: 10.19678/j.issn.1000-3428.0068712
55. LENG T A, ZHANG Y M, HAN K, et al. Self-sampling meta SAM: enhancing few-shot medical image segmentation with meta-learning[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Washington D.C., USA: IEEE Press, 2024: 7910-7920.
56. QI X Y, WU Y F, MAO Y Q, et al. Self-guided few-shot semantic segmentation for remote sensing imagery based on large vision models[M]. Berlin, Germany: Springer, 2024.
57. HE C, LI K, ZHANG Y, et al. Weakly-supervised concealed object segmentation with SAM-based pseudo labeling and multi-scale feature grouping[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2305.11003.
58.
59.
60. CUI C, DENG R N, LIU Q, et al. All-in-SAM: from weak annotation to pixel-wise nuclei segmentation with prompt-based finetuning[J]. Journal of Physics: Conference Series, 2024, 2722(1): 012012. doi: 10.1088/1742-6596/2722/1/012012
61.
62. WU K, ZHANG J N, PENG H W, et al. TinyViT: fast pretraining distillation for small vision transformers[M]. Berlin, Germany: Springer, 2022.
63. ZHANG H J, SU Y Y, XU X, et al. Improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2024: 23385-23395.
64. SAHOO P, SINGH A K, SAHA S, et al. A systematic survey of prompt engineering in large language models: techniques and applications[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2402.07927v2.
65.
66. SUN W X, LIU Z Y, ZHANG Y H, et al. An alternative to WSSS? An empirical study of the Segment Anything Model (SAM) on weakly-supervised semantic segmentation problems[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2305.01586v2.
67.
68. XU Q, LI J X, HE X J, et al. ESP-MedSAM: efficient self-prompting SAM for universal domain-generalized medical image segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2407.14153v4.
69.
70. WANG D, ZHANG J, DU B, et al. SAMRS: scaling-up remote sensing segmentation dataset with segment anything model[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2305.02034.
71. FANG L Y, KUANG Y, LIU Q, et al. Temporal difference prompted SAM for remote sensing change detection[J]. Journal of Signal Processing, 2024, 40(3): 417-427.
72. ZHANG J, YANG X B, JIANG R, et al. RSAM-Seg: a SAM-based approach with prior knowledge integration for remote sensing image semantic segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2402.19004v1.
73. LEE H, KIM K, LEE K. Application of Geo-Segment Anything Model (SAM) scheme to water body segmentation: an experiment study using CAS500-1 images[J]. Korean Journal of Remote Sensing, 2024, 40(4): 343-350.
74. ZHANG X, LIU Y, LIN Y M, et al. UV-SAM: adapting segment anything model for urban village identification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2024: 22520-22528.
75. XI L D, YU J C, GE D Q, et al. SAM-CFFNet: SAM-based cross-feature fusion network for intelligent identification of landslides[J]. Remote Sensing, 2024, 16(13): 2334. doi: 10.3390/rs16132334
76. GIANNAKIS I, BHARDWAJ A, SAM L, et al. Deep learning universal crater detection using Segment Anything Model (SAM)[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2304.07764v1.
77.
78. MOENCK K, WENDT A, PRÜNTE P, et al. Industrial segment anything—a case study in aircraft manufacturing, intralogistics, maintenance, repair, and overhaul[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2307.12674v1.
79.
80. LI Z S, HUO D, MEURER M, et al. Efficient cutting tool wear segmentation based on segment anything model[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2407.01211.
81.
82. CORDTS M, OMRAN M, RAMOS S, et al. The Cityscapes dataset for semantic urban scene understanding[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2016: 3213-3223.
83. NEUHOLD G, OLLMANN T, BULÒ S R, et al. The Mapillary Vistas dataset for semantic understanding of street scenes[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2017: 5000-5009.
84. LAKHANI P, MONGAN J, SINGHAL C, et al. The 2021 SIIM-FISABIO-RSNA machine learning COVID-19 challenge: annotation and standard exam classification of COVID-19 chest radiographs[J]. Journal of Digital Imaging, 2023, 36(1): 365-372.
85. LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[M]. Berlin, Germany: Springer, 2014.
86. EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL Visual Object Classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338. doi: 10.1007/s11263-009-0275-4
87. ZHOU B L, ZHAO H, PUIG X, et al. Semantic understanding of scenes through the ADE20K dataset[J]. International Journal of Computer Vision, 2019, 127(3): 302-321. doi: 10.1007/s11263-018-1140-0
88. MARTIN D, FOWLKES C, TAL D, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics[C]//Proceedings of the 8th IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2001: 416-423.
89. DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2009: 248-255.
90. WANG L J, LU H C, WANG Y F, et al. Learning to detect salient objects with image-level supervision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2017: 3796-3805.
91. WANG J J, ZHENG Z, MA A L, et al. LoveDA: a remote sensing land-cover dataset for domain adaptive semantic segmentation[EB/OL]. [2024-10-11]. https://arxiv.org/abs/2110.08733v6.
92. LECLERC S, SMISTAD E, PEDROSA J, et al. Deep learning for segmentation using an open large-scale dataset in 2D echocardiography[J]. IEEE Transactions on Medical Imaging, 2019, 38(9): 2198-2210. doi: 10.1109/TMI.2019.2900516
93. ZHANG J, FAN D P, DAI Y C, et al. RGB-D saliency detection via cascaded mutual information minimization[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2021: 4318-4327.
94. TU Z Z, XIA T, LI C L, et al. RGB-T image saliency detection via collaborative graph learning[J]. IEEE Transactions on Multimedia, 2020, 22(1): 160-173. doi: 10.1109/TMM.2019.2924578
95. QIN X B, DAI H, HU X B, et al. Highly accurate dichotomous image segmentation[M]. Berlin, Germany: Springer, 2022.
96. FAN D P, JI G P, SUN G L, et al. Camouflaged object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 2777-2787.
97. VICENTE T F Y, HOU L, YU C P, et al. Large-scale training of shadow detectors with noisily-annotated shadow examples[M]. Berlin, Germany: Springer International Publishing, 2016.
98. FAN D P, JI G P, XU P, et al. Advances in deep concealed scene understanding[J]. Visual Intelligence, 2023, 1(1): 16. doi: 10.1007/s44267-023-00019-6
99. TAJBAKHSH N, GURUDU S R, LIANG J M. Automated polyp detection in colonoscopy videos using shape and context information[J]. IEEE Transactions on Medical Imaging, 2016, 35(2): 630-644. doi: 10.1109/TMI.2015.2487997
100. SHUMAILOV I, SHUMAYLOV Z, ZHAO Y R, et al. AI models collapse when trained on recursively generated data[J]. Nature, 2024, 631(8022): 755-759. doi: 10.1038/s41586-024-07566-y