[1]MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[C]//2020 European Conference on Computer Vision. Glasgow, UK: Springer, 2020: 405-421.
[2]BARRON J T, MILDENHALL B, TANCIK M, et al. Mip-NeRF: a multiscale representation for anti-aliasing neural radiance fields[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE Press, 2021: 5855-5864.
[3]BARRON J T, MILDENHALL B, VERBIN D, et al. Mip-NeRF 360: unbounded anti-aliased neural radiance fields[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE Press, 2022: 5470-5479.
[4]KERBL B, KOPANAS G, LEIMKUEHLER T, et al. 3D Gaussian Splatting for Real-Time Radiance Field Rendering[J]. ACM Transactions on Graphics, 2023, 42(4): 1-14.
[5]LU T, YU M L, XU L N, et al. Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE Press, 2024: 20654-20664.
[6]REN K R, JIANG L H, LU T, et al. Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025: 1-15.
[7]PARK H, RYU G, KIM W. DropGaussian: Structural Regularization for Sparse-view Gaussian Splatting[C]//2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE Press, 2025: 21600-21609.
[8]WANG Q, ZHANG J H, WANG Z T, et al. X2S-Net: three-dimensional reconstruction of spine based on biplanar X-rays[J]. Computer Engineering, 2025, 51(1): 277-286.
[9]FEI Y Z, CAI X, ZHAO M B, et al. Implicit-expression-based 3D reconstruction of clothing[J]. Computer Engineering, 2024, 50(5): 220-228.
[10]SONG Q P, TANG J L, XIN J. 3-dimensional reconstruction for soybean plant of seedling stage based on growth model[J]. Computer Engineering.
[11]ZHAN C L, ZHANG Y F, LIN Y, et al. RDG-GS: Relative Depth Guidance with Gaussian Splatting for Real-time Sparse-View 3D Rendering[EB/OL]. [2025-01-19]. https://api.semanticscholar.org/CorpusID:275758188.
[12]CHUNG J, OH J T, LEE K M. Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Seattle, USA: IEEE Press, 2024: 811-820.
[13]XIONG H, MUTTUKURU S, UPADHYAY R, et al. SparseGS: real-time 360° sparse view synthesis using Gaussian splatting[EB/OL]. [2024-05-13]. https://arxiv.org/abs/2312.00206.
[14]LI J, ZHANG J, BAI X, et al. DNGaussian: optimizing sparse-view 3D Gaussian radiance fields with global-local depth normalization[EB/OL]. [2024-03-24]. https://arxiv.org/abs/2403.06912.
[15]ZHU Z, FAN Z, JIANG Y, et al. FSGS: real-time few-shot view synthesis using Gaussian splatting[EB/OL]. [2024-06-16]. https://arxiv.org/abs/2312.00451.
[16]SCHÖNBERGER J L, FRAHM J M. Structure-from-Motion Revisited[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE Press, 2016: 4104-4113.
[17]RANFTL R, LASINGER K, HAFNER D, et al. Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(3):1623-1637.
[18]HAN L, ZHOU J S, LIU Y S, et al. Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis[EB/OL]. [2024-08-27]. https://api.semanticscholar.org/CorpusID:273549288.
[19]XU W Z, GAO H C, SHEN S H, et al. MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views[C]//2024 European Conference on Computer Vision. Milan, Italy: Springer, 2025: 203-220.
[20]TAUD H, MAS J F. Multilayer Perceptron (MLP)[M]//CAMACHO OLMEDO M T, PAEGELOW M, MAS J F, et al. Geomatic Approaches for Modeling Land Change Scenarios. Cham: Springer, 2018: 451-455.
[21]BARRON J T, MILDENHALL B, VERBIN D, et al. Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields[C]// 2023 IEEE/CVF International Conference on Computer Vision. Paris, France: IEEE Press, 2023: 19640-19648.
[22]TAKIKAWA T, EVANS A, TREMBLAY J, et al. Variable Bitrate Neural Fields[C]//ACM SIGGRAPH 2022 Conference Proceedings. Vancouver, Canada: ACM, 2022: 41.
[23]HU T, LIU S, CHEN Y, et al. EfficientNeRF: Efficient Neural Radiance Fields[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE Press, 2022: 12892-12901.
[24]GARBIN S J, KOWALSKI M, JOHNSON M, et al. FastNeRF: High-Fidelity Neural Rendering at 200FPS[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE Press, 2021: 14326-14335.
[25]WU G J, YI T R, FANG J M, et al. 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE Press, 2024: 20310-20320.
[26]YI T R, FANG J M, WANG J J, et al. GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE Press, 2024: 6796-6807.
[27]CHIBANE J, BANSAL A, LAZOVA V, et al. Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE Press, 2021: 7907-7916.
[28]NIEMEYER M, BARRON J T, MILDENHALL B, et al. RegNeRF: regularizing neural radiance fields for view synthesis from sparse inputs[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE Press, 2022: 5480-5490.
[29]WANG G, CHEN Z, LOY C C, et al. SparseNeRF: distilling depth ranking for few-shot novel view synthesis[C]//2023 IEEE/CVF International Conference on Computer Vision. Paris, France: IEEE Press, 2023: 9065-9076.
[30]YANG J W, PAVONE M, WANG Y. FreeNeRF: Improving Few-Shot Neural Rendering with Free Frequency Regularization[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE Press, 2023: 8254-8263.
[31]HEDMAN P, SRINIVASAN P P, MILDENHALL B, et al. Baking Neural Radiance Fields for Real-Time View Synthesis[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE Press, 2021.
[32]DENG K, LIU A, ZHU J Y, et al. Depth-supervised NeRF: Fewer Views and Faster Training for Free[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE Press, 2022: 12872-12881.
[33]PICCINELLI L, YANG Y H, SAKARIDIS C, et al. UniDepth: Universal Monocular Metric Depth Estimation[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE Press, 2024: 10106-10116.
[34]LIN L F, LU R F, CHEN Q, et al. VGNC: Reducing the Overfitting of Sparse-view 3DGS via Validation-guided Gaussian Number Control[EB/OL]. [2025-04-20]. https://arxiv.org/abs/2504.14548.
[35]HAN H, WU Y L, DENG C, et al. FatesGS: Fast and Accurate Sparse-View Surface Reconstruction using Gaussian Splatting with Depth-Feature Consistency[EB/OL]. [2025-01-08]. https://arxiv.org/pdf/2501.04628.
[36]YOO J C, HAN T H. Fast Normalized Cross-Correlation[J]. Circuits, Systems, and Signal Processing, 2009, 28(6): 819-843.
[37]XIANG L T, ZHENG H P, HUANG Y T, et al. PointGS: Point Attention-Aware Sparse View Synthesis with Gaussian Splatting[EB/OL]. [2025-06-12]. https://api.semanticscholar.org/CorpusID:279318858.
[38]RAJAEI K, GIRYES R. DIP-GS: Deep Image Prior for Gaussian Splatting Sparse View Recovery[EB/OL]. [2025-08-10]. https://arxiv.org/abs/2508.07372.
[39]SUN X Y, CHEN R N, GONG M M, et al. Intern-GS: Vision Model Guided Sparse-View 3D Gaussian Splatting[EB/OL]. [2025-08-10]. https://openreview.net/forum?id=H31Js3M96Z.
[40]YU A, YE V, TANCIK M, et al. pixelNeRF: Neural Radiance Fields from One or Few Images[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE Press, 2021: 4576-4585.
[41]WANG Q, WANG Z, GENOVA K, et al. IBRNet: Learning Multi-View Image-Based Rendering[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE Press, 2021: 4688-4697.
[42]CHEN A P, XU Z X, ZHAO F Q, et al. MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE Press, 2021: 14104-14113.