Automatic Crop Seed Counting Method Based on YOLOX Model
Abstract: Seed counting is a critical yet tedious step in obtaining the thousand-grain weight of crop seeds. At present, seed counting is generally performed manually or with dedicated thousand-grain weight instruments; however, manual counting is inefficient, and the instruments are expensive and not easy to carry. A dataset was constructed from mobile-phone images of six common crop seed types. Building on the YOLOX model, an attention mechanism was introduced and the loss function was improved, yielding the YOLOX-P model for automatic seed counting. Results show that YOLOX-P adds only 0.09 M parameters over YOLOX while improving mAP by 0.74 percentage points, reaching 99.38%. The model's inference time on an NVIDIA GeForce RTX 2060 graphics card with 6 GB of video memory is 18.68 ms, making it suitable for deployment on mobile devices. The proposed model significantly improves the efficiency and quality of thousand-grain weight measurement.
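Counting by detection, as described in the abstract, reduces to tallying the bounding boxes a detector returns per class. A minimal sketch of that final step (function names, tuple layout, and the confidence threshold are illustrative assumptions, not from the paper):

```python
# Hypothetical post-processing step: given scored detections from a
# detector such as YOLOX-P, the seed count per class is the number of
# boxes that survive a confidence threshold.

def count_seeds(detections, conf_threshold=0.5):
    """detections: list of (class_name, confidence, box) tuples."""
    counts = {}
    for cls, conf, _box in detections:
        if conf >= conf_threshold:
            counts[cls] = counts.get(cls, 0) + 1
    return counts

# Example: three corn detections, one below threshold
dets = [
    ("corn", 0.97, (10, 10, 40, 40)),
    ("corn", 0.91, (50, 12, 80, 44)),
    ("corn", 0.32, (90, 15, 120, 47)),  # discarded: low confidence
]
print(count_seeds(dets))  # {'corn': 2}
```

In practice the detector's own non-maximum suppression would run first, so each seed contributes at most one box to the tally.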
Keywords:
- thousand-grain weight
- seed counting
- deep learning
- object detection
- YOLOX
Table 1. Indicator data for YOLOX detection of six types of seeds
Unit: %

| Seed type       | Precision | AP     |
|-----------------|-----------|--------|
| Sunflower seed  | 98.80     | 99.69  |
| Watermelon seed | 99.40     | 99.78  |
| Red bean        | 95.61     | 99.03  |
| Wheat           | 91.73     | 93.71  |
| Peanut          | 100.00    | 100.00 |
| Corn            | 97.54     | 99.65  |
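As a quick consistency check, mAP is simply the mean of the per-class AP values; averaging Table 1's AP column reproduces the 98.64% baseline mAP reported for YOLOX in Table 2:

```python
# mAP is the mean of per-class AP. Values are taken directly from
# Table 1 (AP column, in percent).
ap = {
    "sunflower seed": 99.69,
    "watermelon seed": 99.78,
    "red bean": 99.03,
    "wheat": 93.71,
    "peanut": 100.00,
    "corn": 99.65,
}
mAP = sum(ap.values()) / len(ap)
print(round(mAP, 2))  # 98.64
```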
Table 2. YOLOX versus YOLOX-P data
| Metric            | YOLOX | YOLOX-P |
|-------------------|-------|---------|
| Corn AP/%         | 99.65 | 99.88   |
| Red bean AP/%     | 99.03 | 99.02   |
| Wheat AP/%        | 93.71 | 97.88   |
| mAP/%             | 98.64 | 99.38   |
| Parameters/M      | 8.94  | 9.03    |
| Inference time/ms | 16.59 | 18.68   |
Table 3. Indicator data for ablation study
| CBAM | CIoU | Precision/% | Wheat AP/% | mAP/% |
|------|------|-------------|------------|-------|
|      |      | 91.73       | 93.71      | 98.64 |
| √    |      | 93.66       | 96.36      | 99.16 |
|      | √    | 94.07       | 97.70      | 99.35 |
| √    | √    | 95.24       | 97.88      | 99.38 |
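The CIoU loss ablated above augments plain IoU with a center-distance penalty and an aspect-ratio consistency term. A minimal pure-Python sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form (illustrative only, not the paper's implementation):

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss = 1 - IoU + rho^2/c^2 + alpha*v for predicted/ground-truth boxes."""
    x1p, y1p, x2p, y2p = box_p
    x1g, y1g, x2g, y2g = box_g
    # Intersection over union
    iw = max(0.0, min(x2p, x2g) - max(x1p, x1g))
    ih = max(0.0, min(y2p, y2g) - max(y1p, y1g))
    inter = iw * ih
    area_p = (x2p - x1p) * (y2p - y1p)
    area_g = (x2g - x1g) * (y2g - y1g)
    iou = inter / (area_p + area_g - inter)
    # Squared distance between box centers (rho^2)
    rho2 = ((x1p + x2p) / 2 - (x1g + x2g) / 2) ** 2 + \
           ((y1p + y2p) / 2 - (y1g + y2g) / 2) ** 2
    # Squared diagonal of the smallest enclosing box (c^2)
    cw = max(x2p, x2g) - min(x1p, x1g)
    ch = max(y2p, y2g) - min(y1p, y1g)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term v and its trade-off weight alpha
    v = (4 / math.pi ** 2) * (math.atan((x2g - x1g) / (y2g - y1g))
                              - math.atan((x2p - x1p) / (y2p - y1p))) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

# Perfectly overlapping boxes give zero loss
print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0
```

Unlike plain IoU loss, the distance term keeps the gradient informative even when the predicted and ground-truth boxes do not overlap, which matters for densely packed small objects such as seeds.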
Table 4. Comparison data of CenterNet, YOLOv4-M1 and YOLOX-P
| Model     | mAP/% | Parameters/M |
|-----------|-------|--------------|
| CenterNet | 73.95 | 32.66        |
| YOLOv4-M1 | 84.35 | 12.29        |
| YOLOX-P   | 99.38 | 9.03         |
Table 5. YOLOX-P counting results for each group of corn seeds
| Group | Ground truth | Run 1 | Run 2 | Run 3 |
|-------|--------------|-------|-------|-------|
| 1     | 100          | 100   | 100   | 100   |
| 2     | 200          | 200   | 200   | 200   |
| 3     | 300          | 300   | 300   | 300   |
| 4     | 400          | 401   | 400   | 400   |
| 5     | 500          | 500   | 499   | 500   |
| 6     | 600          | 600   | 600   | 599   |
| 7     | 700          | 701   | 699   | 700   |
| 8     | 800          | 800   | 802   | 801   |
| 9     | 900          | 901   | 902   | 899   |
| 10    | 1000         | 1001  | 1000  | 1002  |
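The table's 30 predictions (10 groups, 3 runs each) can be summarized as a mean absolute error; this is a quick reader-side calculation, not a statistic computed in the paper:

```python
# Ground-truth counts and the three prediction runs, transcribed from Table 5.
truth = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
preds = [
    (100, 100, 100), (200, 200, 200), (300, 300, 300),
    (401, 400, 400), (500, 499, 500), (600, 600, 599),
    (701, 699, 700), (800, 802, 801), (901, 902, 899),
    (1001, 1000, 1002),
]
errors = [abs(p - t) for t, runs in zip(truth, preds) for p in runs]
mae = sum(errors) / len(errors)
print(mae)  # 0.5
```

That is, across all 30 runs the model miscounts by half a seed on average, with no single run off by more than 2.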