[1] HUANG F C, CHEN K, WETZSTEIN G. The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues[J]. ACM Transactions on Graphics, 2015, 34(4): 60-72.
[2] WANG Y L, LIU F, ZHANG K B, et al. LFNet: a novel bidirectional recurrent convolutional neural network for light-field image super-resolution[J]. IEEE Transactions on Image Processing, 2018, 27(9): 4274-4286.
[3] SUN Q Y, ZHANG S, CHANG S, et al. Multi-dimension fusion network for light field spatial super-resolution using dynamic filters[EB/OL]. [2022-02-01]. https://arxiv.org/abs/2008.11449.
[4] KIM C, ZIMMER H, PRITCH Y, et al. Scene reconstruction from high spatio-angular resolution light fields[J]. ACM Transactions on Graphics, 2013, 32(4): 73-81.
[5] PERRA C, MURGIA F, GIUSTO D. An analysis of 3D point cloud reconstruction from light field images[C]//Proceedings of the 6th International Conference on Image Processing Theory, Tools and Applications. Washington D.C., USA: IEEE Press, 2016: 1-6.
[6] HONAUER K, JOHANNSEN O, KONDERMANN D, et al. A dataset and evaluation methodology for depth estimation on 4D light fields[C]//Proceedings of the Asian Conference on Computer Vision. Berlin, Germany: Springer, 2017: 19-34.
[7] SHIN C, JEON H G, YOON Y, et al. EPINET: a fully-convolutional neural network using epipolar geometry for depth from light field images[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 4748-4757.
[8] LEISTNER T, SCHILLING H, MACKOWIAK R, et al. Learning to think outside the box: wide-baseline light field depth estimation with EPI-shift[C]//Proceedings of the International Conference on 3D Vision. Washington D.C., USA: IEEE Press, 2019: 249-257.
[9] 赵猛, 金一丞, 尹勇. 立体显示中双目视差模型和深度感知研究[J]. 计算机工程, 2011, 37(17): 271-273.
ZHAO M, JIN Y C, YIN Y. Research on binocular parallax model and depth perception in stereo display[J]. Computer Engineering, 2011, 37(17): 271-273. (in Chinese)
[10] HEBER S, POCK T. Convolutional networks for shape from light field[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 3746-3754.
[11] HEBER S, YU W, POCK T. Neural EPI-volume networks for shape from light field[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 2271-2279.
[12] HEBER S, YU W, POCK T. U-shaped networks for shape from light field[C]//Proceedings of the British Machine Vision Conference. York, UK: British Machine Vision Association, 2016: 1-12.
[13] LUO Y X, ZHOU W H, FANG J P, et al. EPI-patch based convolutional neural network for depth estimation on 4D light field[C]//Proceedings of the International Conference on Neural Information Processing. Berlin, Germany: Springer, 2017: 642-652.
[14] LIU R, YANG C X, SUN W X, et al. StereoGAN: bridging synthetic-to-real domain gap by joint optimization of domain translation and stereo matching[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 12754-12763.
[15] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 2242-2251.
[16] 王程, 张骏, 高隽. 抗高光的光场深度估计方法[J]. 中国图象图形学报, 2020, 25(12): 2630-2646.
WANG C, ZHANG J, GAO J. Anti-specular light-field depth estimation algorithm[J]. Journal of Image and Graphics, 2020, 25(12): 2630-2646. (in Chinese)
[17] WANNER S, GOLDLUECKE B. Variational light field analysis for disparity estimation and super-resolution[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(3): 606-619.
[18] CHEN D D, YUAN L, LIAO J, et al. Stereoscopic neural style transfer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 6654-6663.
[19] LI K Y, ZHANG J, SUN R, et al. EPI-based oriented relation networks for light field depth estimation[EB/OL]. [2022-02-01]. https://arxiv.org/abs/2007.04538.
[20] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 2261-2269.
[21] 贾瑞明, 李阳, 李彤, 等. 多层级特征融合结构的单目图像深度估计网络[J]. 计算机工程, 2020, 46(12): 207-214.
JIA R M, LI Y, LI T, et al. Monocular image depth estimation network with multiple level feature fusion structure[J]. Computer Engineering, 2020, 46(12): 207-214. (in Chinese)
[22] GODARD C, MAC AODHA O, BROSTOW G J. Unsupervised monocular depth estimation with left-right consistency[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 6602-6611.
[23] RAJ A S. Light-field database creation and depth estimation[EB/OL]. [2022-02-01]. https://www.vincentqin.tech/posts/light-field-depth-estimation/.
[24] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2014: 740-755.
[25] SCHILLING H, DIEBOLD M, ROTHER C, et al. Trust your model: light field depth estimation with inline occlusion handling[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 4530-4538.
[26] SHENG H, ZHAO P, ZHANG S, et al. Occlusion-aware depth estimation for light field using multi-orientation EPIs[J]. Pattern Recognition, 2018, 74: 587-599.