Development and Validation of a Real-Time YOLOv4-Based Multi-Fruit Detection Model for Autonomous Robotic Harvesting.

dc.contributor.author Nyamwange Ombuna, Paul
dc.date.accessioned 2026-01-13T09:30:22Z
dc.date.available 2026-01-13T09:30:22Z
dc.date.issued 2025-12
dc.identifier.citation Ombuna, P. N. (2025). Development and Validation of a Real-Time YOLOv4-Based Multi-Fruit Detection Model for Autonomous Robotic Harvesting. en_US
dc.identifier.issn 2583-5300
dc.identifier.uri https://doi.org/10.59256/indjcst.20250403017
dc.identifier.uri https://repository.cuk.ac.ke/handle/123456789/1867
dc.description A research article published in the Fifth Dimension Research Publication. en_US
dc.description.abstract Background: The global agricultural sector faces unprecedented challenges: the world's food requirements are projected to increase by 70% by 2050, while harvesting operations suffer persistent labor shortages worldwide. Autonomous harvesting machines equipped with computer vision offer a promising solution, but their viability depends entirely on robust real-time fruit detection. Problem Statement: Current fruit detection systems are susceptible to environmental variation, including changing lighting conditions, occlusions, and the computational burden of real-time execution in field environments. Existing models are either too computationally demanding for real-time use or sacrifice accuracy for speed, limiting their practical adoption in autonomous harvesting systems. Objectives: In this study, a YOLOv4 deep learning model was trained and evaluated on the Google Open Images Dataset for real-time multi-fruit detection; its accuracy across eight fruit classes was assessed under varying environmental conditions, and its effectiveness was compared with other detection architectures. Methodology: A quantitative experimental approach was employed with 7,700 annotated images from the Google Open Images Dataset, split into training (70%), validation (15%), and test (15%) sets. The YOLOv4 architecture was fine-tuned with data augmentation techniques and custom anchor boxes (see the illustrative sketch after this record), and performance was evaluated using precision, recall, mean Average Precision (mAP@0.5), and inference speed. Results: The network reached an overall mAP of 0.889 across the eight fruit classes, with per-class precision of 0.85-0.94 and recall of 0.78-0.90. Real-time speeds of 45 FPS were achieved on GPU hardware, substantially faster than Faster R-CNN (5 FPS) at comparable accuracy. Environmental tests confirmed robust performance under normal lighting, with modest degradation under heavy occlusion and low light. Recommendations: Future work should apply domain adaptation techniques to bridge the training-deployment gap, optimize the model for edge deployment, and conduct field trials on real-world robotic harvesting systems. en_US
dc.language.iso en en_US
dc.publisher Fifth Dimension Research Publication. en_US
dc.relation.ispartofseries Volume 4, Issue 3 (September-December 2025); pp. 89-93.
dc.title Development and Validation of a Real-Time YOLOv4-Based Multi-Fruit Detection Model for Autonomous Robotic Harvesting. en_US
dc.type Article en_US
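
Illustrative note: the abstract states that the YOLOv4 architecture was fine-tuned with custom anchor boxes. The standard way to derive such anchors, following the YOLO papers, is IoU-based k-means clustering over the ground-truth box dimensions. The sketch below is a minimal illustration of that procedure under stated assumptions, not the authors' published code; the placeholder box data, anchor count (k = 9, as in YOLOv4), and function names are assumptions.

import numpy as np

def iou_wh(boxes, anchors):
    # Width/height IoU, treating every box and anchor as centered at the
    # same origin; boxes is (N, 2), anchors is (K, 2), result is (N, K).
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    # Cluster ground-truth (w, h) pairs using 1 - IoU as the distance,
    # so large and small boxes are treated on an equal footing.
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most.
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        updated = np.array([boxes[assign == i].mean(axis=0)
                            if np.any(assign == i) else anchors[i]
                            for i in range(k)])
        if np.allclose(updated, anchors):
            break
        anchors = updated
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort small to large

if __name__ == "__main__":
    # Placeholder data: random normalized (w, h) pairs standing in for the
    # boxes parsed from the 7,700-image annotation set described above.
    rng = np.random.default_rng(1)
    boxes = rng.uniform(0.02, 0.6, size=(5000, 2))
    print(kmeans_anchors(boxes))

In practice the resulting normalized (w, h) pairs would be scaled to the network input resolution (e.g., 608x608 for YOLOv4) before being written into the model configuration.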

