Performance Analysis of Deep Learning-based Object Detectors on Raspberry Pi for Detecting Melon Leaf Abnormality

Hanif Rahmat, Sri Wahjuni, Hendra Rahmawan

Abstract


Melon cultivation requires intensive treatment and costly maintenance. Digital image processing with deep learning can help handle diseases in melon plants efficiently. Deep-learning-based object detection is significantly more accurate than traditional approaches, but it consumes substantial computational and storage resources, so speed and accuracy become a tradeoff when deploying it on devices with limited computing capability such as the Raspberry Pi. This study comparatively analyzes the performance of deep-learning-based object detection algorithms implemented on a limited computing device, namely the Raspberry Pi. The detected objects in this study are melon leaves, classified into two categories: abnormal and normal. The experiment was conducted using Faster R-CNN, the Single Shot MultiBox Detector (SSD), and YOLOv3. The results showed that Faster R-CNN achieved the highest mAP (~49%) at ~2.5 seconds per image, but also had the highest resource usage. Since accuracy matters more than time complexity in melon leaf detection, Faster R-CNN can be recommended as the best object detection algorithm to implement on the Raspberry Pi. SSD, however, is a fast algorithm with considerable accuracy for real-time detection; SSD MobileNetV2 not only had a fast computational time but also used the fewest resources. Although YOLOv3 had the best running time (0.5 s per image), making it the fastest algorithm, its mAP was too low (below 20%). YOLOv3 is therefore not recommended for melon leaf abnormality detection, since it would allow more detection errors to occur.
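The mAP scores reported above rest on matching predicted boxes to ground-truth boxes by intersection over union (IoU). As a minimal illustrative sketch (not the authors' evaluation code; the `[x1, y1, x2, y2]` box format is an assumption), the IoU computation can be written as:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as a true positive when IoU >= 0.5
# (the PASCAL VOC criterion); mAP averages precision over recall levels
# and over classes (here: abnormal and normal leaves).
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # partial overlap, ~0.143
```

With true/false positives decided this way per class, average precision is the area under each class's precision-recall curve, and mAP is the mean over the two leaf classes.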

Keywords


Faster R-CNN; melon; object detection; Raspberry Pi; SSD; YOLOv3.





DOI: http://dx.doi.org/10.18517/ijaseit.12.2.13801




Published by INSIGHT - Indonesian Society for Knowledge and Human Development