2023, Volume 1, Issue 1: 9-14. DOI: 10.00000/TIOT.2023.100002

Research Article | Feature Paper | 12 June 2023
1 School of Informatics, Huazhong Agricultural University, Wuhan 430070, China
2 School of Computer, BaoJi University of Arts and Sciences, Baoji 721016, China
* Corresponding Author
Received: 03 May 2023, Accepted: 09 June 2023, Published: 12 June 2023

Abstract
The identification of immature apples is a key technical step toward automatic real-time orchard monitoring, expert decision support, and orchard yield prediction. In the orchard scene, light reflections and the color of immature apples, which closely resembles that of the leaves, and especially the occlusion and overlap of fruits by leaves and branches, pose great challenges to the detection of immature apples. This paper proposes an improved YOLOv3 method for detecting immature apples in the orchard scene. The method uses CSPDarknet53 as the backbone network, introduces the CIoU bounding-box regression mechanism, and combines it with the Mosaic data-augmentation algorithm to improve detection accuracy. On a data set with severely occluded fruits, the F1 score and mAP of the proposed immature-apple recognition model are 0.652 and 0.675, respectively. The overall detection time for a single 416×416 image is 12 ms, corresponding to a detection speed of 83 frames/s on a GTX 1080 Ti, with a network inference time of 8.6 ms. Therefore, on the severely occluded immature-apple data set, the proposed method achieves a strong detection effect and provides a feasible solution for the automation and mechanization of the apple industry.
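To make the CIoU bounding-box regression mechanism mentioned in the abstract concrete, the minimal Python sketch below computes the CIoU loss between one predicted box and one ground-truth box, following the formulation of Zheng et al. (2020). It assumes axis-aligned boxes given as (x1, y1, x2, y2) pixel coordinates and is an illustrative sketch only, not the authors' training code.

import math

def ciou_loss(box_p, box_g):
    """CIoU loss between a predicted box and a ground-truth box.

    Boxes are (x1, y1, x2, y2) tuples in pixel coordinates. Illustrative
    sketch of the loss of Zheng et al. (2020), not the authors' code.
    """
    # Overlap (IoU) term.
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + 1e-9)

    # Center-distance term: penalizes boxes whose centers are far apart,
    # normalized by the diagonal of the smallest enclosing box.
    cpx, cpy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cgx, cgy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    center_dist = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    ex1, ey1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    ex2, ey2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    enclose_diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    # Aspect-ratio consistency term.
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - iou + center_dist / enclose_diag + alpha * v

For two identical boxes the loss is approximately 0; shifting or reshaping the predicted box increases it even when the two boxes no longer overlap, which is what makes CIoU a stronger regression signal than plain IoU for heavily occluded fruit.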

Keywords
Orchard scene
Immature apple
Improved YOLOv3
Mosaic algorithm
CIoU bounding-box regression mechanism

Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
IECE or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Copyright © 2023 Institute of Emerging and Computer Engineers INC. All rights reserved.