[1] Wallace G K. The JPEG still picture compression
standard[J]. Communications of the ACM, 1991, 34(4):
30-44.
[2] Taubman D S, Marcellin M W, Rabbani M. JPEG2000:
Image compression fundamentals, standards and
practice[J]. Journal of Electronic Imaging, 2002, 11(2):
286-287.
[3] Bellard F. BPG image format, 2014. https://
bellard.org/bpg/.
[4] 叶宗苗. 基于深度学习的端到端智能图像压缩研究
[D]. 杭州电子科技大学, 2022. DOI:10.27075/d.cnki.ghzdc.2022.000096.
(Ye Zongmiao. Research on End-to-End Intelligent Image
Compression Based on Deep Learning[D]. Hangzhou Dianzi
University, 2022. DOI:10.27075/d.cnki.ghzdc.2022.000096.)
[5] Lin F, Sun H, Liu J, et al. Multistage spatial context
models for learned image compression[C]//ICASSP
2023-2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). IEEE, 2023:
1-5.
[6] Wang D, Yang W, Hu Y, et al. Neural data-dependent
transform for learned image compression[C]//
Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition. 2022: 17379-17388.
[5] Fu H, Liang F, Lin J, et al. Learned image compression
with discretized Gaussian-Laplacian-logistic mixture
model and concatenated residual modules[J]. arXiv
preprint arXiv:2107.06463, 2021.
[8] Cheng Z, Sun H, Takeuchi M, et al. Learned image
compression with discretized Gaussian mixture
likelihoods and attention modules[C]//Proceedings of
the IEEE/CVF Conference on Computer Vision and
Pattern Recognition. 2020: 7939-7948.
[9] Ballé J, Laparra V, Simoncelli E P. End-to-end
optimized image compression[J]. arXiv preprint
arXiv:1611.01704, 2016.
[10] Ballé J, Laparra V, Simoncelli E P. Density modeling of
images using a generalized normalization transformation
[J]. arXiv preprint arXiv:1511.06281, 2015.
[11] Minnen D, Singh S. Channel-wise autoregressive
entropy models for learned image compression[C]//2020
IEEE International Conference on Image Processing
(ICIP). IEEE, 2020: 3339-3343.
[12] Guo Z, Zhang Z, Feng R, et al. Causal contextual
prediction for learned image compression[J]. IEEE
Transactions on Circuits and Systems for Video
Technology, 2021, 32(4): 2329-2341.
[13] He D, Yang Z, Peng W, et al. Elic: Efficient learned
image compression with unevenly grouped
space-channel contextual adaptive
coding[C]//Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition. 2022:
5718-5727.
[14] Mentzer F, Toderici G D, Tschannen M, et al.
High-fidelity generative image compression[J].
Advances in Neural Information Processing Systems,
2020, 33: 11913-11924.
[15] Xie Y, Cheng K L, Chen Q. Enhanced invertible
encoding for learned image
compression[C]//Proceedings of the 29th ACM
international conference on multimedia. 2021: 162-170.
[16] 皇甫晓瑛, 钱惠敏, 黄敏. 结合注意力机制的深度神经网
络综述[J]. 计算机与现代化, 2023(02): 40-49+57.
(HUANGFU Xiao-ying, QIAN Hui-min, HUANG Min.
A Review of Deep Neural Networks Combined with
Attention Mechanism[J]. Computer and Modernization,
2023(02): 40-49+57.)
[17] Zhao L, Bai H, Wang A, et al. Learning a virtual codec
based on deep convolutional neural network to compress
image[J]. Journal of Visual Communication and Image
Representation, 2019, 63: 102589.
[18] Vaswani A, Shazeer N, Parmar N, et al. Attention is all
you need[J]. Advances in neural information processing
systems, 2017, 30.
[19] Carion N, Massa F, Synnaeve G, et al. End-to-end object
detection with transformers[C]//European Conference on
Computer Vision. Cham: Springer International
Publishing, 2020: 213-229.
[20] Zou R, Song C, Zhang Z. The devil is in the details:
Window-based attention for image compression[C]//
Proceedings of the IEEE/CVF conference on computer
vision and pattern recognition. 2022: 17492-17501.
[21] Qian Y, Lin M, Sun X, et al. Entroformer: A
transformer-based entropy model for learned image
compression[J]. arXiv preprint arXiv:2202.05492, 2022.
[22] Kim J H, Heo B, Lee J S. Joint global and local
hierarchical priors for learned image compression[C]//
Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition. 2022: 5992-6001.
[23] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic
models[J]. Advances in neural information processing
systems, 2020, 33: 6840-6851.
[24] He D, Zheng Y, Sun B, et al. Checkerboard context
model for efficient learned image
compression[C]//Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern
Recognition. 2021: 14771-14780.
[25] Kingma D P, Ba J. Adam: A method for stochastic
optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[26] Kodak E. Kodak lossless true color image suite
(PhotoCD PCD0992)[J]. URL http://r0k.us/graphics/kodak,
1993, 6.
[27] Asuni N, Giachetti A. TESTIMAGES: a Large-scale
Archive for Testing Visual Devices and Basic Image
Processing Algorithms[C]//STAG. 2014: 63-70.
[28] Toderici G, Shi W, Timofte R, et al. Workshop and
challenge on learned image compression[C]//Proceedings
of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition. 2020.
[29] Ballé J, Minnen D, Singh S, et al. Variational image
compression with a scale hyperprior[J]. arXiv preprint
arXiv:1802.01436, 2018.
[30] Chen F, Xu Y, Wang L. Two-stage octave residual
network for end-to-end image compression[C]//
Proceedings of the AAAI Conference on Artificial
Intelligence. 2022, 36(4): 3922-3929.
[31] He D, Yang Z, Peng W, et al. Elic: Efficient learned
image compression with unevenly grouped
space-channel contextual adaptive
coding[C]//Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition. 2022:
5718-5727.
[32] Minnen D, Ballé J, Toderici G D. Joint autoregressive
and hierarchical priors for learned image compression[J].
Advances in neural information processing systems,
2018, 31.
[33] Joint Video Experts Team. VVC official test model VTM.
2021.
[34] Luo W, Li Y, Urtasun R, et al. Understanding the
effective receptive field in deep convolutional neural
networks[J]. Advances in neural information processing
systems, 2016, 29.
[35] Liu J, Sun H, Katto J. Learned image compression with
mixed transformer-cnn architectures[C]//Proceedings of
the IEEE/CVF Conference on Computer Vision and
Pattern Recognition. 2023: 14388-14397.