[1] 张涵钰, 李振波, 李蔚然, 等. 基于机器视觉的水产养殖计数研究综述. 计算机应用, 2023, 43(9): 2970-2982.
ZHANG H Y, LI Z B, LI W R, et al. Review of research on aquaculture counting based on machine vision. Journal of Computer Applications, 2023, 43(9): 2970-2982. (in Chinese)
[2] LI J, SUN J N, CUI X R, et al. Automatic counting method of fry based on computer vision. IEEJ Transactions on Electrical and Electronic Engineering, 2023, 18(7): 1151-1159.
doi: 10.1002/tee.23821
[3] LI W R, ZHU Q, ZHANG H Y, et al. A lightweight network for portable fry counting devices. Applied Soft Computing, 2023, 136: 110140.
doi: 10.1016/j.asoc.2023.110140
[4] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
doi: 10.1109/TPAMI.2016.2577031
[5] CAI Z W, VASCONCELOS N. Cascade R-CNN: delving into high quality object detection[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2018: 6154-6162.
[6] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[C]//Proceedings of ECCV'16. Berlin, Germany: Springer, 2016.
[7]
[8] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2016: 779-788.
[9] CAI K W, MIAO X Y, WANG W, et al. A modified YOLOv3 model for fish detection based on MobileNetv1 as backbone. Aquacultural Engineering, 2020, 91: 102117.
doi: 10.1016/j.aquaeng.2020.102117
[10]
[11] LI X, TANG Y H, GAO T W. Deep but lightweight neural networks for fish detection[C]//Proceedings of OCEANS'17. Washington D. C., USA: IEEE Press, 2017: 1-5.
[12] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[C]//Proceedings of the 31st AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2017: 4278-4284.
[13] RAZA K, HONG S. Fast and accurate fish detection design with improved YOLO-v3 model and transfer learning. International Journal of Advanced Computer Science and Applications, 2020, 11(2): 516-525.
[14] FENG D C, XIE J F, LIU T L, et al. Fry counting models based on attention mechanism and YOLOv4-tiny. IEEE Access, 2022, 10: 217854-217866.
[15]
[16] 黎袁富, 杜家豪, 莫家浩, 等. 基于YOLOX的鱼苗检测与计数. 电子元器件与信息技术, 2022, 6(5): 192-194.
LI Y F, DU J H, MO J H, et al. Fry detection and counting based on YOLOX. Electronic Components and Information Technology, 2022, 6(5): 192-194. (in Chinese)
[17]
[18] ZHANG H Y, LI W R, QI Y Y, et al. Dynamic fry counting based on multi-object tracking and one-stage detection. Computers and Electronics in Agriculture, 2023, 209: 107871.
doi: 10.1016/j.compag.2023.107871
[19] 刘康. 基于深度学习的鱼苗自动计数方法的研究与实现[D]. 马鞍山: 安徽工业大学, 2021.
LIU K. Research and implementation of automatic counting method of fry based on deep learning[D]. Maanshan: Anhui University of Technology, 2021. (in Chinese)
[20] CHEN J R, KAO S H, HE H, et al. Run, don't walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2023: 12021-12031.
[21] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2023-12-10]. https://arxiv.org/abs/1704.04861.
[22] ZHENG Z H, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12993-13000.
[23] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing, 2022, 506: 146-157.
doi: 10.1016/j.neucom.2022.07.042
[24]
[25]
[26] WANG C Y, BOCHKOVSKIY A, LIAO H M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2023: 7464-7475.
[27]
[28] LIU Z, MAO H Z, WU C Y, et al. A ConvNet for the 2020s[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2022: 11976-11986.
[29] HAN K, WANG Y H, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2020: 1580-1589.
[30]
[31] MA N N, ZHANG X Y, ZHENG H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[C]//Proceedings of ECCV'18. Berlin, Germany: Springer, 2018: 122-138.
[32]
[33]
[34] LI J F, WEN Y, HE L H. SCConv: spatial and channel reconstruction convolution for feature redundancy[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2023: 6153-6162.
[35] ZHANG X, SONG Y, SONG T, et al. AKConv: convolutional kernel with arbitrary sampled shapes and arbitrary number of parameters[EB/OL]. [2023-12-10]. https://arxiv.org/abs/2311.11587.
[36] ZHANG X, LIU C, YANG D G, et al. RFAConv: innovating spatial attention and standard convolutional operation[EB/OL]. [2023-12-10]. https://arxiv.org/abs/2304.03198.