
Current Chinese Computer Science

ISSN (Print): 2665-9972
ISSN (Online): 2665-9964

Research Article

Leaf Image Classification with the Aid of Transfer Learning: A Deep Learning Approach

Author(s): Srinivasa Rao Dammavalam*, Ramesh Babu Challagundla, Vangipuram Sravan Kiran, Rajasekhar Nuvvusetty, Lalith Bharadwaj Baru, Rohit Boddeda and Sai Vardhan Kanumolu

Volume 1, Issue 1, 2021

Published on: 11 August, 2020

Page: [61 - 76] Pages: 16

DOI: 10.2174/2665997201999200811150433

Abstract

Background: Crop diseases are a primary hazard to food security, and they remain a serious problem in many parts of the world where essential expertise and resources are unavailable. Typically, agriculturalists or specialists inspect plants with the naked eye to detect and identify disease. Machine vision models, in particular Convolutional Neural Networks (CNNs), have had a substantial impact on feature extraction. Since 2015, numerous applications for the automatic classification and recognition of crop diseases have been developed.

Methods: In this paper, we analyzed and assessed various state-of-the-art models proposed over the past decade. Starting from these pre-trained models with their best parameters, we designed a method that takes numerous leaf images and classifies them into infected and healthy classes for each type of leaf independently, as sketched below.
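A minimal sketch of this per-leaf-type setup, assuming TensorFlow/Keras and a hypothetical directory layout (e.g. data/tomato/healthy and data/tomato/infected); the actual dataset organization used in the paper may differ:

```python
# Illustrative sketch only: load leaf images of one leaf type into
# "healthy" vs. "infected" classes. The folder names are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)   # VGG-family input resolution
BATCH_SIZE = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/tomato",              # one sub-folder per class: healthy/, infected/
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",        # two classes per leaf type
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/tomato",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
)
```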

Results: Through our examination, we concluded that the VGG models stand out among the many cited prototypes and give results on par with them. These VGG models (VGG16 and VGG19) are used for feature extraction; a set of dense layers is then appended and trained for classification (see the sketch below). The performance of the various machine vision prototypes was visualized, and their sophisticated architecture not only extracts detailed features but also overcomes many shortcomings of earlier approaches. Performance was assessed for several types of leaf images, with accuracy scores above 97.5% for VGG16 and 96.72% for VGG19.
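The following is a minimal sketch of such a VGG-based transfer-learning classifier in Keras; the sizes of the appended dense layers, the dropout rate, and the optimizer settings are illustrative assumptions rather than the exact configuration reported in the paper (VGG19 can be substituted by importing tf.keras.applications.VGG19):

```python
# Illustrative sketch: VGG16 (ImageNet weights) as a frozen feature
# extractor, with a small stack of dense layers trained on top for the
# binary healthy/infected decision.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # appended dense layers (sizes assumed)
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # healthy vs. infected
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)
```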

Conclusion: AUC-ROC curves were plotted to illustrate the accuracy of the classification; VGG16 and VGG19 attain at least 96.6% and 95% area under the curve (AUC), respectively, which reflects their robustness. A sketch of this evaluation follows.
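A minimal sketch of how such an AUC-ROC evaluation can be produced with scikit-learn and matplotlib, assuming the `model` and `val_ds` objects from the sketches above; the plotting details are illustrative:

```python
# Illustrative sketch of the ROC/AUC evaluation (assumed tooling:
# scikit-learn, matplotlib). `model` and `val_ds` are the hypothetical
# objects built in the earlier sketches.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# Collect labels and predicted probabilities in a single pass so their
# ordering stays aligned even if the dataset shuffles between iterations.
y_true, y_score = [], []
for images, labels in val_ds:
    y_true.append(labels.numpy().ravel())
    y_score.append(model.predict(images, verbose=0).ravel())
y_true = np.concatenate(y_true)
y_score = np.concatenate(y_score)

fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

plt.plot(fpr, tpr, label=f"VGG16 (AUC = {auc:.3f})")
plt.plot([0, 1], [0, 1], "k--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```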

Keywords: Leaf classification, deep learning, transfer learning, automated plant diagnosis, CNNs.

